-
-Abjad: A Writing System That Only Uses Consonants
-Have you ever wondered how some languages can be written without vowels? How do people read and write such languages? What are the advantages and disadvantages of using such a writing system? In this article, we will explore the fascinating world of abjads, a type of writing system that only uses consonants.
-An abjad is a writing system in which only consonants are represented, leaving vowel sounds to be inferred by the reader. This contrasts with alphabets, which provide graphemes for both consonants and vowels. The term abjad was introduced in 1990 by Peter T. Daniels, a linguist who studies writing systems. He derived the word from the first four letters of the Arabic alphabet in their traditional (abjadi) order: alif, ba, jim, and dal.
-Abjads are mainly used for Semitic languages (a branch of the Afro-Asiatic family) such as Arabic, Hebrew, and Aramaic; Amharic, by contrast, is written in the Ge'ez script, which is an abugida rather than an abjad. These languages have a feature called consonantal roots, which means that the core meaning of a word is carried by its consonants, while the vowels indicate grammatical variations. For example, in Arabic, the root k-t-b relates to "writing", and different vowel patterns form words such as kataba (he wrote), kitab (book), and kutub (books).
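-To make the root-and-pattern idea concrete, here is a small, purely illustrative Python sketch (the "1"/"2"/"3" slot notation is invented for this example, not a standard linguistic formalism): the consonantal root supplies the core meaning, and a vowel template slots it into a particular word form.
-
-```python
-def apply_pattern(root, pattern):
-    """Interleave a three-consonant root into a vowel pattern.
-
-    Digits 1-3 in the pattern mark slots for the root's consonants;
-    every other character (the vowels) is copied from the pattern.
-    """
-    out = []
-    for ch in pattern:
-        out.append(root[int(ch) - 1] if ch in "123" else ch)
-    return "".join(out)
-
-root = ("k", "t", "b")  # the root k-t-b, associated with writing
-for pattern, gloss in [("1a2a3a", "he wrote"), ("1i2a3", "book"), ("1u2u3", "books")]:
-    print(apply_pattern(root, pattern), "=", gloss)
-# kataba = he wrote
-# kitab = book
-# kutub = books
-```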
-Abjads are not only interesting from a linguistic perspective but also from a historical and cultural one. They have been used for thousands of years to record some of the most ancient and influential civilizations and religions in human history. They have also influenced other writing systems and contributed to the development of science, literature, art, and more.
-The History of Abjads
-Abjads are one of the oldest types of writing systems in the world. They developed out of earlier pictographic writing: the linear abjads trace their letter shapes back to Egyptian hieroglyphs via the Proto-Sinaitic script, while the Ugaritic abjad borrowed the wedge-shaped look of Mesopotamian cuneiform. These older scripts used symbols that represented objects, actions, or sounds; over time the symbols were simplified and abstracted until each sign stood for a single consonant. This led to the emergence of the early abjads, such as Ugaritic, Phoenician, Aramaic, and Hebrew.
-The earliest known abjad is the Ugaritic script, which was used to write the Ugaritic language, a Northwest Semitic language spoken in the city-state of Ugarit (modern-day Syria) from around 1400 to 1200 BCE. The Ugaritic script consisted of 30 letters, each representing a consonant. It was written from left to right on clay tablets using a stylus.
-The most influential abjad in history is the Phoenician script, which was used to write the Phoenician language, a Canaanite language spoken by the Phoenicians, a seafaring people who lived in the eastern Mediterranean region from around 1500 to 300 BCE. The Phoenician script consisted of 22 letters, each representing a consonant. It was written from right to left on various materials such as stone, metal, wood, or parchment.
-The Phoenician script was widely adopted and adapted by other peoples and cultures, giving rise to many other writing systems, such as Greek, Latin, Arabic, Hebrew, and more. Some of these writing systems added vowel symbols to the Phoenician script, creating alphabets, while others retained the abjad structure but modified the shapes and sounds of the letters.
- The Phoenician Abjad
-The Phoenician abjad is considered to be the ancestor of many modern writing systems. It was developed by the Phoenicians, a maritime civilization that dominated trade and commerce in the ancient Mediterranean world. The Phoenicians used their script to record their history, culture, religion, and business transactions. They also spread their script to other regions through their trade contacts and colonies.
-The Phoenician abjad consisted of 22 letters, each representing a consonant sound. The letters were named after objects that started with that sound. For example, the letter aleph (𐤀) represented the sound /ʔ/ (a glottal stop) and was named after an ox (ʾālep), because the shape of the letter resembled an ox's head. The letter beth (𐤁) represented the sound /b/ and was named after a house (bayt), because the shape of the letter resembled a house.
-The Phoenician abjad was written from right to left in horizontal lines. The letters were usually written without any spaces or punctuation marks between them. The vowel sounds were not written but inferred by the reader based on the context and the consonantal roots. The direction of writing sometimes changed depending on the medium or the purpose. For example, some inscriptions were written in boustrophedon style, which means "as the ox plows", alternating between right-to-left and left-to-right lines.
-The Phoenician abjad had a significant impact on other writing systems and languages. It was adopted and adapted by many peoples and cultures in different regions and times. Some of these adaptations include:
-
-The Greek alphabet: The Greeks borrowed the Phoenician abjad around the 9th or 8th century BCE and added vowel symbols to it, creating an alphabet that could represent all the sounds of their language. Greek writing also eventually changed direction, from right-to-left (and boustrophedon) to the left-to-right order used today.
-The Latin alphabet: The Latin alphabet is derived from an Etruscan adaptation of the Greek alphabet, which in turn was derived from a western variant of the Phoenician abjad. The Latin alphabet was used to write Latin, the language of ancient Rome, and later became the basis for many modern alphabets such as English, French, Spanish, etc.
-The Arabic abjad: The Arabic abjad developed from the Nabataean form of the Aramaic script, which in turn descends from the Phoenician abjad. The Arabic abjad is used to write Arabic, the language of Islam and one of the most widely spoken languages in the world. The Arabic abjad has 28 letters, each representing a consonant sound. The letters have different shapes depending on their position in a word (initial, medial, final, or isolated). The Arabic abjad also uses diacritical marks to indicate vowel sounds, but they are usually omitted in most texts.
-The Hebrew abjad: The modern ("square") Hebrew script derives from the Aramaic abjad, itself a descendant of the Phoenician abjad, while the older Paleo-Hebrew script was a direct variant of Phoenician. The Hebrew abjad is used to write Hebrew, the language of Judaism and the official language of Israel. It has 22 letters, each representing a consonant sound. Some of the letters can also represent vowel sounds depending on their position or context. The Hebrew abjad also uses diacritical marks called niqqud to indicate vowel sounds, but they are usually omitted in most texts.
-
- The Arabic Abjad
-The Arabic abjad is the most widely used abjad in the world today. It is used to write Arabic, the official language of 26 countries and a co-official language in six others. Arabic is also the liturgical language of Islam, the religion of about 1.8 billion Muslims worldwide. The Arabic script is also used, with some added letters, to write other languages such as Persian, Urdu, and Pashto.
-The Arabic abjad consists of 28 letters, each representing a consonant sound. The letters are written from right to left in horizontal lines. The letters have different shapes depending on their position in a word: initial (at the beginning), medial (in the middle), final (at the end), or isolated (standing alone). For example, the letter ba (ب) has four different shapes: ـب (final), بـ (initial), ـبـ (medial), and ب (isolated).
-The Arabic abjad does not represent vowel sounds explicitly, but it uses diacritical marks called harakat to indicate them. These marks are placed above or below the consonant letters and can change the meaning and pronunciation of a word. For example, the word kataba (he wrote) is written as كَتَبَ with three harakat: a fatha (a short /a/ sound) above the first and second letters, and a sukun (no vowel sound) above the third letter. However, these marks are usually omitted in most texts, except for religious texts, children's books, dictionaries, or texts for learners.
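-Because the harakat are encoded in Unicode as combining marks attached to the preceding letter, the usual unvocalized spelling can be recovered from a fully vocalized string with a few lines of standard-library Python. The sketch below is only an illustration; note that it strips shadda and every other combining sign as well, which is how plain running text normally appears.
-
-```python
-import unicodedata
-
-def strip_harakat(text: str) -> str:
-    """Remove Arabic vowel signs (and any other combining marks) from a string."""
-    decomposed = unicodedata.normalize("NFD", text)
-    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))
-
-print(strip_harakat("كَتَبَ"))  # prints the bare consonantal skeleton: كتب
-```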
-The Arabic abjad also has other symbols and signs that modify or enhance the letters and words. Some of these include:
-
-The hamza (ء), which represents a glottal stop sound (/ʔ/). It can appear alone or with a carrier letter such as alif (ا), waw (و), or ya (ي).
-The shadda (ّ), which represents a gemination or doubling of a consonant sound. It is placed above a letter and indicates that it is pronounced twice. For example, the word mudarris (teacher) is written as مُدَرِّس with a shadda above the letter ra (ر), indicating that the /r/ is doubled: /mudarris/.
-The tanwin (ـً ـٍ ـٌ), which represents an /n/ sound added to the end of a word in certain grammatical cases. It is written by doubling the vowel mark (fathatan, kasratan, or dammatan), and the fathatan form is usually followed by a supporting alif (ا). For example, the word kitabun (a book) is written as كِتَابٌ with a kasra (a short /i/ sound) below the first letter and a dammatan (a doubled damma, read as /-un/) above the last letter.
-The alif maqsura (ى), which represents a long /a/ sound at the end of a word. It has the shape of a dotless ya. For example, the word layla (night) is written as لَيْلَى with an alif maqsura at the end.
-The alif lam (ال), which represents the definite article "the" in Arabic. It consists of an alif followed by a lam and is attached to the beginning of a word. For example, the word kitab (book) becomes al-kitab (the book) when written with an alif lam.
-
- The Hebrew Abjad
- The Hebrew abjad is the writing system of the Hebrew language, the language of Judaism and the official language of Israel. The Hebrew abjad is also used to write other Jewish languages, such as Yiddish, Ladino, Judeo-Arabic, etc. Hebrew writing has a long and rich history, going back (in its early, Paleo-Hebrew form) to around the 10th century BCE. It has been used to record some of the most sacred and influential texts in human history, such as the Torah, the Talmud, and kabbalistic works like the Zohar.
-The Hebrew abjad consists of 22 letters, each representing a consonant sound. The letters are written from right to left in horizontal lines. Most letters have a single shape, but five of them (kaf, mem, nun, pe, and tsadi) take a special final form at the end of a word. For example, the letter kaf (כ) has two shapes: כ (regular) and ך (final).
-The Hebrew abjad does not represent vowel sounds explicitly, but it uses diacritical marks called niqqud to indicate them. These marks are placed below or above the consonant letters and can change the meaning and pronunciation of a word. For example, the word shalom (peace) is written as שָׁלוֹם with a kamatz (an /a/ sound) below the first letter (shin) and a holam (an /o/ sound) written as a dot on the vav that follows the lamed. However, these marks are usually omitted in most texts, except for religious texts, children's books, dictionaries, or texts for learners.
-The Hebrew abjad also has other symbols and signs that modify or enhance the letters and words. Some of these include:
-
-The alef (א), which represents a glottal stop sound (/ʔ/) or a silent letter that serves as a placeholder for a vowel sound. It can also indicate a long vowel sound when combined with other letters.
-The vav (ו), which represents a consonant sound (/v/) or a vowel sound (/u/ or /o/). It can also indicate a long vowel sound when combined with other letters.
-The yod (י), which represents a consonant sound (/j/) or a vowel sound (/i/ or /e/). It can also indicate a long vowel sound when combined with other letters.
-The he (ה), which represents a consonant sound (/h/) or a silent letter that serves as an indicator of grammatical gender or number. It can also indicate a long vowel sound when combined with other letters.
-The geresh (׳), which marks a modification of a consonant sound or an abbreviation of a single word. For example, the letter gimel (ג) with a geresh becomes ג׳ and represents the sound /dʒ/ (as in jeep), used in loanwords. A letter followed by a geresh can also abbreviate a word, as in ר׳ for rabbi (רַבִּי).
-The gershayim (״), which marks an acronym or multi-word abbreviation and, in modern usage, can also serve as a quotation mark. For example, the Hebrew Bible is known by the acronym Tanakh, written תנ״ך with the gershayim placed before the last letter.
-
- Other Abjads
-Besides Phoenician, Arabic, and Hebrew, there are other abjads that have been used to write various languages in different regions and times. Some of these abjads include:
-
-The Ugaritic abjad: As mentioned earlier, this is the earliest known abjad that was used to write the Ugaritic language in ancient Syria. It had 30 letters and was written from left to right on clay tablets.
-The Syriac abjad: This is a descendant of the Aramaic abjad that was used to write Syriac, a dialect of Aramaic used by Christian communities in the Middle East; its classical literature flourished from roughly the 4th to the 8th centuries CE, and the script survives in liturgical use today. It had 22 letters and was written from right to left on parchment or paper. It later developed vowel marks and other symbols to indicate pronunciation and grammar.
-The Ge'ez abjad: This is an adaptation of the South Arabian abjad that was used to write Ge'ez, an ancient Semitic language of Ethiopia and Eritrea that ceased to be spoken around the 10th century CE but survives as a liturgical language. It had 26 consonant letters and was written from left to right on parchment or stone. Around the 4th century CE it acquired vowel marks fused onto the consonant letters, creating syllabic symbols and turning it into an abugida (the system still used for Amharic and Tigrinya).
-The Brahmi script: Brahmi is often thought to be an adaptation of the Aramaic abjad, although its origin is debated. It was used to write various languages in ancient India, such as Sanskrit, Prakrit, Pali, etc. It had about 33 consonant letters and was written from left to right on stone, metal, or palm leaves. Vowel marks attached to the consonant letters created syllabic symbols, making Brahmi an abugida rather than a strict abjad.
-
- The Advantages and Disadvantages of Abjads
-Abjads are a unique and fascinating type of writing system, but they also have their pros and cons. Depending on the language, the context, and the purpose, abjads can offer some benefits and drawbacks compared to other writing systems. Here are some of them:
- Advantages of Abjads
-Some of the advantages of using abjads are:
-
-They can save space and time: Abjads can be more compact and concise than other writing systems, as they only use consonant letters and omit vowel marks. This can save space on writing materials and time for writing and reading.
-They can preserve meaning and ambiguity: Abjads can preserve the meaning of words by focusing on their consonantal roots, which are usually more stable and consistent than their vowel patterns. This can also allow for some intentional ambiguity or flexibility in interpretation, which can be useful for poetry, rhetoric, or humor.
-They can reflect linguistic features: Abjads can reflect some linguistic features of the languages they are used for, such as consonantal roots, morphological patterns, phonetic variations, etc. This can make them more suitable and natural for representing these languages than other writing systems.
-
- Disadvantages of Abjads
-Some of the disadvantages of using abjads are:
-
-They can cause ambiguity and confusion: Abjads can cause ambiguity and confusion for readers and learners, as they do not provide clear information about vowel sounds, which can change the meaning and pronunciation of words. This can make it difficult to read unfamiliar words, names, or foreign terms.
-They can require memorization and inference: Abjads can require memorization and inference for readers and learners, as they have to rely on their knowledge of the language, the context, and the conventions to infer the vowel sounds and meanings of words. This can make it challenging to learn and master these writing systems.
-They can limit communication and expression: Abjads can limit communication and expression for writers and speakers, as they do not allow for precise and accurate representation of vowel sounds, which can convey nuances, emotions, tones, etc. This can make it hard to express oneself clearly and effectively in these writing systems.
-
- Abjads are a type of writing system that only uses consonants, leaving vowel sounds to be inferred by the reader. Alphabets are another type of writing system that uses both consonants and vowels, providing graphemes for all the sounds of a language. How do abjads and alphabets differ in terms of structure, function, and usage? Let's find out.
- The Definition of Alphabets
-An alphabet is a writing system in which each letter represents a phoneme, a basic unit of sound in a language. An alphabet usually consists of two types of letters: consonants and vowels. Consonants are letters that represent sounds that are produced by obstructing or constricting the airflow in the vocal tract, such as /b/, /k/, /s/, etc. Vowels are letters that represent sounds that are produced by vibrating the vocal cords without any obstruction or constriction, such as /a/, /i/, /u/, etc.
-An alphabet can represent all the sounds of a language with a relatively small number of letters, usually between 20 and 30. This makes it easier to learn and use than other writing systems that have more complex or numerous symbols, such as logographic or syllabic systems. An alphabet can also allow for more accurate and consistent spelling and pronunciation of words, as each letter corresponds to a specific sound.
- The Contrast of Abjads and Alphabets
-Abjads and alphabets are both types of writing systems that use letters to represent sounds, but they differ in how they treat vowel sounds. Abjads only represent consonant sounds, leaving vowel sounds to be inferred by the reader based on the context and the consonantal roots. Alphabets represent both consonant and vowel sounds, providing graphemes for all the phonemes of a language.
-This difference has implications for the structure, function, and usage of these writing systems. Abjads tend to be more compact and concise than alphabets, as they only use consonant letters and omit vowel marks. However, abjads also tend to be more ambiguous and confusing than alphabets, as they do not provide clear information about vowel sounds, which can change the meaning and pronunciation of words. Abjads also tend to reflect some linguistic features of the languages they are used for, such as consonantal roots, morphological patterns, phonetic variations, etc. Alphabets tend to be more precise and consistent than abjads, as they provide graphemes for all the sounds of a language. However, alphabets also tend to be more complex and diverse than abjads, as they have different letters and rules for different languages.
- The Examples of Alphabets
-Some of the most common alphabets in the world are:
-
-The Latin alphabet: This is the most widely used alphabet in the world today. It is used to write many languages such as English, French, Spanish, German, Italian, etc. It has 26 letters: 21 consonants and 5 vowels.
-The Greek alphabet: This is the alphabet that was derived from the Phoenician abjad by adding vowel symbols. It is used to write Greek, the official language of Greece and Cyprus. It has 24 letters: 17 consonants and 7 vowels.
-The Cyrillic alphabet: This is an adaptation of the Greek alphabet, traditionally attributed to the disciples of Saints Cyril and Methodius (who had earlier devised the Glagolitic alphabet), created in the 9th-10th centuries CE to write Slavic languages. It is used to write many languages such as Russian, Ukrainian, Bulgarian, Serbian, etc. The modern Russian version has 33 letters: 21 consonants, 10 vowels, and the two signs ъ and ь.
-The Devanagari script: This is a descendant of the Brahmi script that reached its standard form in India by about the 10th century CE and is used to write Sanskrit and many modern languages such as Hindi, Nepali, Marathi, etc. Strictly speaking it is an abugida rather than an alphabet, since vowels other than the inherent /a/ are written as marks attached to the consonant letters. It has 47 primary characters: 33 consonants and 14 vowels.
-
- In this article, we have learned about abjads, a type of writing system that only uses consonants. We have explored the history of abjads, their advantages and disadvantages compared to other writing systems, and how they differ from alphabets. We have also seen some examples of abjads and alphabets that are used to write various languages in the world.
-Abjads are a fascinating and unique way of writing that reflect the linguistic and cultural features of the languages they are used for. They have been used for thousands of years to record some of the most ancient and influential civilizations and religions in human history. They have also influenced other writing systems and contributed to the development of science, literature, art, and more.
-If you are interested in learning more about abjads or other writing systems, you can visit some of the following websites:
-
-[Omniglot]: A website that provides information and examples of various writing systems and languages.
-[ScriptSource]: A website that provides resources and tools for studying, using, and developing writing systems.
-[Ancient Scripts]: A website that provides an introduction to different ancient writing systems and their evolution.
-
-We hope you enjoyed reading this article and learned something new. If you have any questions or comments, please feel free to share them with us. Thank you for your time and attention.
- FAQs About Abjads
-Here are some frequently asked questions about abjads and their answers:
-
-What is the difference between an abjad and an abugida?
-An abjad is a writing system that only represents consonant sounds, leaving vowel sounds to be inferred by the reader. An abugida is a writing system that represents consonant sounds with letters and vowel sounds with diacritical marks that are attached to the consonant letters, creating syllabic symbols. For example, Arabic is an abjad, while Ge'ez is an abugida.
-What is the difference between an alphabet and a syllabary?
-An alphabet is a writing system that uses letters to represent phonemes, basic units of sound in a language. An alphabet usually consists of two types of letters: consonants and vowels. A syllabary is a writing system that uses symbols to represent syllables, units of sound that consist of one or more phonemes. A syllabary usually has more symbols than an alphabet, as each symbol represents a different combination of consonants and vowels. For example, the Latin script is an alphabet, while the Japanese kana (hiragana and katakana) are syllabaries.
-
-What is the difference between a script and a language?
-A script is a system of symbols that are used to write one or more languages. A language is a system of communication that consists of sounds, words, grammar, etc. A script can be used to write different languages, and a language can be written in different scripts. For example, the Latin script is used to write many languages such as English, French, Spanish, etc. The English language can be written in different scripts such as Latin, Braille, Morse code, etc.
-
-What are some of the benefits of learning different writing systems?
-Learning different writing systems can have many benefits for personal and professional development. Some of these benefits include:
-
-Enhancing cognitive skills: Learning different writing systems can improve memory, attention, creativity, problem-solving, etc.
-Expanding cultural knowledge: Learning different writing systems can increase awareness and appreciation of different cultures, histories, religions, etc.
-Improving communication skills: Learning different writing systems can improve reading, writing, speaking, listening, etc.
-Boosting career opportunities: Learning different writing systems can open up new possibilities for education, work, travel, etc.
-
-How can I learn different writing systems?
-There are many ways to learn different writing systems depending on your goals, preferences, and resources. Some of these ways include:
-
-Taking online courses: There are many online platforms that offer courses on different writing systems and languages.
-Using apps or software: There are many apps or software that provide interactive and engaging tools for learning different writing systems and languages.
-Reading books or articles: There are many books or articles that provide information and examples of different writing systems and languages.
-Watching videos or podcasts: There are many videos or podcasts that provide visual and auditory explanations and demonstrations of different writing systems and languages.
-Joining communities or groups: There are many communities or groups that provide opportunities and support for learning different writing systems and languages.
-Practicing and applying: There are many ways to practice and apply what you have learned, such as writing, reading, speaking, listening, etc.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Crowd Evolution Mod APK The Ultimate Crowd Simulation Game with Amazing Graphics.md b/spaces/1phancelerku/anime-remove-background/Crowd Evolution Mod APK The Ultimate Crowd Simulation Game with Amazing Graphics.md
deleted file mode 100644
index e07d6672559f6aa6f6187032a6bb28799572acb9..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Crowd Evolution Mod APK The Ultimate Crowd Simulation Game with Amazing Graphics.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-Crowd Evolution APK Mod Download: A Fun and Addictive Game for Android Users
- Do you love games that let you build your own army, fight against other crowds, and travel through different time periods? If so, you should check out Crowd Evolution, a fun and addictive game for Android devices. In this game, you can grow and evolve your crowd, equip them with various weapons and items, and defeat your enemies in exciting battles. You can also download the Crowd Evolution APK mod to get unlimited money, gems, and no ads. In this article, we will tell you more about this game, its features, why you should download the mod, how to install it, and some tips and tricks to help you play better.
- What is Crowd Evolution?
- Crowd Evolution is a game developed by Rollic Games, a popular studio that has created many other hit games such as Tangle Master 3D, Go Knots 3D, Picker 3D, and more. Crowd Evolution is a game that combines elements of action, strategy, simulation, and arcade. It has a simple premise: you start with a small crowd of people, and you have to run around the map to recruit more followers, avoid or fight other crowds, and reach the end of the level. Along the way, you will also pass through different gates that will either increase or decrease your crowd size, time period, or weapon type. The game has hundreds of levels to play, each with different challenges and environments.
- A game about growing and evolving your crowd
- One of the main aspects of Crowd Evolution is growing and evolving your crowd. You start with a few people, but you can add more by running into them or by passing through green gates. The more people you have in your crowd, the stronger you will be in combat. You can also evolve your crowd by upgrading their stats such as health, damage, fire rate, speed, etc. You can do this by spending coins that you earn from completing levels or by watching videos. Evolving your crowd will make them more powerful and resilient against enemies.
- A game about fighting and defeating your enemies
- Another aspect of Crowd Evolution is fighting and defeating your enemies. You will encounter many other crowds on your way to the end of the level, some of them bigger or smaller than yours. You can either avoid them or engage them in combat. If you choose to fight them, you will have to use your weapons and items to shoot them down or knock them off the map. You can also use traps or obstacles to hinder their progress. Fighting enemies will earn you more coins and gems, which you can use to buy new weapons or items.
- A game about time travel and different eras
- The last aspect of Crowd Evolution is time travel and different eras. As you play the game, you will notice that there are different gates that will change the time period of your crowd. You can travel from the Stone Age to the Medieval Age, from the Industrial Age to the Modern Age, and even to the Future Age. Each era has its own weapons and items that you can use, such as clubs, swords, guns, lasers, etc. You can also see the changes in the environment and the enemies as you travel through time. Time travel adds more variety and fun to the game, as you can experience different scenarios and challenges.
- What are the features of Crowd Evolution?
- Crowd Evolution is a game that has many features that make it enjoyable and addictive. Here are some of them:
- As mentioned before, Crowd Evolution lets you use different weapons and items depending on the time period of your crowd. You can equip your crowd with clubs, spears, axes, swords, shields, bows, arrows, guns, grenades, rockets, lasers, plasma guns, and more. Each weapon has its own advantages and disadvantages, such as range, damage, fire rate, accuracy, etc. You can also use items such as helmets, armor, boots, jetpacks, etc. to enhance your crowd's performance. You can buy new weapons and items with coins or gems, or find them on the map.
- Upgrade your crowd and unlock new abilities
- Crowd Evolution also lets you upgrade your crowd and unlock new abilities that will help you in your journey. You can upgrade your crowd's stats such as health, damage, fire rate, speed, etc. by spending coins. You can also unlock new abilities such as double jump, dash, freeze time, etc. by spending gems. Upgrading your crowd and unlocking new abilities will make them more powerful and versatile against enemies.
- Diverse levels and environments
- Crowd Evolution has hundreds of levels to play, each with different objectives and challenges. Some levels require you to reach the end of the map with a certain number of people in your crowd. Some levels require you to defeat a boss or a rival crowd. Some levels require you to collect a certain amount of coins or gems. Some levels require you to survive for a certain amount of time. Each level also has different environments that match the time period of your crowd. You can see forests, deserts, castles, cities, factories, spaceships, etc. Each environment also has different traps and obstacles that you have to avoid or use to your advantage.
- Simple and intuitive controls
- Crowd Evolution has simple and intuitive controls that make it easy to play. You just have to swipe on the screen to move your crowd around the map. You can also tap on the screen to shoot your weapons or use your items. The game also has an auto-aim feature that helps you target your enemies more easily. The controls are responsive and smooth, making the game fun and satisfying.
- Colorful and cartoonish graphics
- Crowd Evolution has colorful and cartoonish graphics that make it appealing and attractive. The game has a bright and vibrant color scheme that suits the mood and theme of the game. The game also has a cute and funny art style that makes the characters and enemies look adorable and hilarious. The game also has smooth animations and effects that add more life and charm to the game.
- Why download the Crowd Evolution APK mod?
- Crowd Evolution is a free-to-play game that you can download from the Google Play Store or the App Store. However, if you want to enjoy the game more fully and without any limitations or interruptions, you should download the Crowd Evolution APK mod. The APK mod is a modified version of the game that gives you some extra benefits and features that are not available in the original version. Here are some of the reasons why you should download the Crowd Evolution APK mod:
- Unlimited money and gems
- One of the main reasons to download the Crowd Evolution APK mod is that it gives you unlimited money and gems. Money and gems are the two currencies in the game that you can use to buy new weapons, items, upgrades, and abilities. However, in the original version of the game, you have to earn them by completing levels, watching videos, or spending real money. This can be time-consuming, boring, or expensive. With the Crowd Evolution APK mod, you don't have to worry about that. You will have unlimited money and gems from the start, and you can spend them as much as you want without running out. This way, you can buy and unlock everything in the game without any hassle or restriction.
- No ads and no interruptions
- Another reason to download the Crowd Evolution APK mod is that it removes all the ads and interruptions from the game. Ads are annoying and distracting, especially when they pop up in the middle of your gameplay or when you are trying to enjoy the game. They can also slow down your device or consume your data. In the original version of the game, you have to watch ads to get extra rewards or to access some features. With the Crowd Evolution APK mod, you don't have to do that. The mod removes all the ads from the game, and you can play without any interruption or disturbance. You can also access all the features without watching any videos.
- Easy installation and compatibility
- The last reason to download the Crowd Evolution APK mod is that it is easy to install and compatible with most Android devices. The mod does not require any root access or special permissions to install. You just have to download the APK file from a trusted source, enable unknown sources on your device settings, locate the downloaded file and tap on it to install, and launch the game and enjoy. The mod also works on most Android devices, regardless of their model or version. The mod is also updated regularly to ensure its functionality and security.
- How to download and install the Crowd Evolution APK mod?
- If you are convinced by the reasons above and want to download and install the Crowd Evolution APK mod, here are the steps that you need to follow:
- Step 1: Download the APK file from a trusted source
- The first step is to download the APK file from a trusted source. You can find many websites that offer the Crowd Evolution APK mod for free, but not all of them are safe or reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. To avoid that, you should only download the APK file from a trusted source that has positive reviews and feedback from other users. You can also scan the file with an antivirus program before installing it.
- Step 2: Enable unknown sources on your device settings
- The second step is to enable unknown sources on your device settings. This is necessary because Android devices normally do not allow installing apps from sources other than the Google Play Store or the App Store. To install the Crowd Evolution APK mod, you have to enable unknown sources on your device settings. To do this, you have to go to your device settings, find the security or privacy option, and look for the unknown sources option. Then, you have to toggle it on or check the box to allow installing apps from unknown sources. This will enable you to install the APK file that you downloaded.
- Step 3: Locate the downloaded file and tap on it to install
- The third step is to locate the downloaded file and tap on it to install. After you have downloaded the APK file and enabled unknown sources, you have to find the file on your device storage. You can use a file manager app or your device's default file explorer to do this. You have to look for the folder where you saved the APK file, usually the downloads folder. Then, you have to tap on the file to start the installation process. You may see a pop-up window asking for your confirmation or permission to install the app. You have to tap on install or allow to proceed with the installation.
- Step 4: Launch the game and enjoy
- The fourth and final step is to launch the game and enjoy. After you have installed the Crowd Evolution APK mod, you will see a new icon on your device's home screen or app drawer. You have to tap on the icon to launch the game and start playing. You will notice that you have unlimited money and gems, no ads, and all the features unlocked from the start. You can now enjoy the game without any limitations or interruptions.
- Tips and tricks for playing Crowd Evolution
- Crowd Evolution is a game that is easy to play but hard to master. It requires some skills and strategies to complete all the levels and defeat all the enemies. Here are some tips and tricks that can help you play better and have more fun:
- Check the gates and choose the best one
- As you play the game, you will encounter different gates that will change your crowd size, time period, or weapon type. Some of these gates are beneficial, while some of them are detrimental. You should always check the gates before passing through them and choose the best one for your situation. For example, if you have a small crowd, you should look for a green gate that will increase your crowd size. If you have a weak weapon, you should look for a gate that will change your weapon type to a stronger one. If you are in a dangerous era, you should look for a gate that will take you to a safer one.
- Upgrade smartly and balance your stats
- Crowd Evolution also lets you upgrade your crowd's stats such as health, damage, fire rate, speed, etc. by spending coins. You should always upgrade your crowd smartly and balance your stats according to your needs and preferences. For example, if you want to have a fast and agile crowd, you should focus on upgrading your speed and fire rate. If you want to have a durable and resilient crowd, you should focus on upgrading your health and damage. You should also avoid upgrading only one stat and neglecting the others, as this will make your crowd unbalanced and vulnerable.
- Kill as many enemies as you can to earn more cash
- Crowd Evolution also lets you kill enemies by shooting them with your weapons or knocking them off the map. You should always try to kill as many enemies as you can, as this will earn you more cash that you can use to buy new weapons, items, upgrades, and abilities. Killing enemies will also reduce their crowd size and make them easier to defeat. You can also use traps or obstacles to kill enemies more efficiently and creatively.
- Push the buttons to activate traps on your foes
- Crowd Evolution also has some levels that have buttons that you can push to activate traps on your foes. These traps can be spikes, saws, lasers, bombs, etc. that can damage or kill your enemies instantly. You should always look for these buttons and push them when you see a large group of enemies approaching. This will help you clear the way and save your ammo and health. You can also use these traps to kill the boss or the rival crowd more easily.
- Watch the videos to get extra rewards (optional)
- Crowd Evolution also gives you the option to watch videos to get extra rewards such as coins, gems, weapons, items, etc. You can watch these videos after completing a level or when you see a special offer on the screen. Watching these videos will give you more resources that you can use to improve your crowd and gameplay. However, this is optional and not necessary if you download the Crowd Evolution APK mod, as you will already have unlimited money and gems.
- Conclusion
- Crowd Evolution is a fun and addictive game for Android users that lets you grow and evolve your crowd, equip them with various weapons and items, and defeat your enemies in exciting battles. You can also download the Crowd Evolution APK mod to get unlimited money, gems, and no ads. In this article, we have told you more about this game, its features, why you should download the mod, how to install it, and some tips and tricks to help you play better. We hope that you have enjoyed reading this article and that you will try out this game and have fun with it.
- FAQs
- Here are some frequently asked questions about Crowd Evolution:
- Q: Is Crowd Evolution a safe game to play?
-A: Yes, Crowd Evolution is a safe game to play. It does not contain any violence, gore, or inappropriate content that may be harmful or offensive to some players. It is suitable for all ages and audiences.
- Q: Is Crowd Evolution a multiplayer game?
-A: No, Crowd Evolution is not a multiplayer game. It is a single-player game that does not require an internet connection or a social media account to play. You can play it offline and by yourself.
- Q: How can I contact the developers of Crowd Evolution?
-A: You can contact the developers of Crowd Evolution by sending them an email at support@rollicgames.com or by visiting their website at https://www.rollicgames.com/. You can also follow them on Facebook, Twitter, Instagram, or YouTube for more updates and news about their games.
- Q: How can I get more coins and gems in Crowd Evolution?
-A: You can get more coins and gems in Crowd Evolution by completing levels, killing enemies, watching videos, or spending real money. However, if you want to get unlimited coins and gems without any effort or cost, you should download the Crowd Evolution APK mod from a trusted source.
- Q: What are some other games like Crowd Evolution?
-A: Some other games like Crowd Evolution are Crowd City, Join Clash 3D, Run Race 3D, Crowd Master 3D, and Crowd Simulator. These are some of the games that have similar gameplay and mechanics as Crowd Evolution, such as running, growing, fighting, and evolving your crowd. You can find these games on the Google Play Store or the App Store and try them out for yourself.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Scary Teacher 3D Mod APK for Free and Enjoy Unlimited Money and Energy.md b/spaces/1phancelerku/anime-remove-background/Download Scary Teacher 3D Mod APK for Free and Enjoy Unlimited Money and Energy.md
deleted file mode 100644
index 253adef08918b94b59a459d28489ef4f034786d3..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Scary Teacher 3D Mod APK for Free and Enjoy Unlimited Money and Energy.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-Download Scary Teacher 3D Mod APK Unlimited Money and Energy
-Have you ever wanted to get revenge on your worst high school teacher? Do you enjoy playing pranks and solving mysteries? If so, you will love Scary Teacher 3D, a horror-themed adventure game where you can scare the creepy teacher by performing various activities and releasing pets under her custody. But what if you could make the game even more fun and exciting by having unlimited money and energy? That's where the mod APK comes in. In this article, we will tell you everything you need to know about downloading and installing Scary Teacher 3D mod APK unlimited money and energy on your Android device.
- Features of Scary Teacher 3D Mod APK
-Scary Teacher 3D is a popular game that has been downloaded over 100 million times on Google Play Store. It has many features that make it appealing to players of all ages, such as:
-
-Unlimited money and energy to prank the teacher. With the mod APK, you don't have to worry about running out of coins or stamina while playing. You can buy any item or upgrade you want, and perform as many pranks as you like without getting tired. This way, you can enjoy the game without any limitations or frustrations.
-Open world style interactive house with 15 rooms and mysteries. The game takes place in the scary teacher's house, which consists of 15 different rooms, each with its own unsolved mystery. You can explore the house freely and find clues, objects, and secrets that will help you complete your missions. You can also interact with various items and use them to your advantage.
-Horror themes but suitable for kids of all age. The game has a spooky atmosphere and sound effects that create a sense of tension and suspense. However, it is not too scary or violent for kids to play. The graphics are cartoonish and colorful, and the pranks are humorous and harmless. The game also has a rating of Teen on Google Play Store, which means it is suitable for players aged 13 and above.
-Easy controls and fun gameplay. The game has simple and intuitive controls that make it easy to play. You can move around using the joystick on the left side of the screen, and interact with items using the buttons on the right side. You can also swipe to change the camera angle and zoom in or out. The gameplay is fun and addictive, as you have to sneak around the house without getting caught by the teacher, and set up pranks that will make her scream or faint.
-
- How to Download and Install Scary Teacher 3D Mod APK
-If you want to download and install Scary Teacher 3D mod APK unlimited money and energy on your Android device, you need to follow these steps:
-
-Allow unknown apps on your Android device. Before you can install any app that is not from Google Play Store, you need to enable unknown sources on your device settings. To do this, go to Settings > Apps & Notifications > Special Access > Install Unknown Apps > Chrome (or whichever browser you use ) and toggle on the Allow from this source option. This will allow you to install apps from outside the official store.
-Download the mod APK file from a reputable source. Next, you need to find a reliable website that offers the mod APK file for Scary Teacher 3D. You can search for it on Google or use the link we provide below. Make sure you download the latest version of the mod APK that is compatible with your device and has no viruses or malware. The file size should be around 100 MB.
-Install the mod APK using a file manager app. After you download the mod APK file, you need to locate it on your device using a file manager app. You can use any app that can access your internal storage or SD card, such as Files by Google or ES File Explorer. Once you find the file, tap on it and follow the instructions to install it. You may need to grant some permissions to the app during the installation process.
-Enjoy the game with unlimited money and energy. Finally, you can launch the game from your app drawer or home screen and enjoy playing Scary Teacher 3D with unlimited money and energy. You can use the money to buy anything you want from the shop, such as costumes, weapons, or gadgets. You can also use the energy to perform as many pranks as you want without getting exhausted. Have fun scaring the teacher and discovering her secrets!
-
- Pros and Cons of Scary Teacher 3D Mod APK
-Downloading and installing Scary Teacher 3D mod APK unlimited money and energy has its advantages and disadvantages. Here are some of them:
-
-
-Pros:
-More fun, less frustration. You can enjoy the game without worrying about running out of money or energy, which can be annoying and frustrating. You can also skip the ads and the in-app purchases that may interrupt your gameplay.
-No ads, no in-app purchases. The mod APK removes all the ads and the in-app purchases that are present in the original game. This means you don't have to watch any videos or spend any real money to play the game.
-
-Cons:
-Potential security risks. Downloading and installing apps from unknown sources can expose your device to viruses, malware, or spyware that may harm your data or privacy. You should always be careful and use a trusted antivirus app to scan any file before installing it.
-Possible compatibility issues. The mod APK may not work with some devices or Android versions, depending on how it was modified. It may also crash or freeze during gameplay, causing you to lose your progress or data.
-May not work with future updates. The mod APK may not be compatible with future updates of the game, which may add new features, bug fixes, or improvements. You may have to wait for a new version of the mod APK or stick with the old one.
-
-
-
- Conclusion and FAQs
-In conclusion, Scary Teacher 3D is a fun and exciting game that lets you prank and scare your creepy high school teacher in her own house. You can make the game even more enjoyable by downloading and installing Scary Teacher 3D mod APK unlimited money and energy, which gives you access to everything you need to have a blast. However, you should also be aware of the potential risks and drawbacks of using a modded app, such as security issues, compatibility problems, or update conflicts. We hope this article has helped you learn more about Scary Teacher 3D mod APK unlimited money and energy and how to download and install it on your Android device.
- Here are some FAQs that may answer some of your questions:
- Q: Is Scary Teacher 3D mod APK safe to use?
-A: Generally speaking, yes, as long as you download it from a reputable source that has no viruses or malware. However, you should always be careful and use a trusted antivirus app to scan any file before installing it. You should also avoid granting any unnecessary permissions to the app during the installation process.
- Q: Is Scary Teacher 3D mod APK legal to use?
-A: That depends on where you live and what laws apply there. Some countries may have strict rules against modifying or distributing apps without permission from the developers or owners. Others may have more lenient regulations or none at all. You should always check your local laws before using any modded app.
- Q: Can I play Scary Teacher 3D mod APK online with other players? A: No, you cannot. Scary Teacher 3D mod APK is a single-player game that does not support online multiplayer mode. You can only play it offline on your own device. If you want to play online with other players, you need to download the original game from Google Play Store and use a stable internet connection.
- Q: Will I get banned from the game if I use Scary Teacher 3D mod APK?
-A: No, you will not. Scary Teacher 3D mod APK does not interfere with the game's servers or data, so there is no risk of getting banned or suspended from the game. You can play the game as normal without any worries.
- Q: Can I update Scary Teacher 3D mod APK to the latest version?
-A: Yes, you can, but only if there is a new version of the mod APK available that matches the latest version of the game. You cannot update the mod APK from the game itself or from Google Play Store, as that will overwrite the modded features and restore the original settings. You need to download and install the new version of the mod APK from the same source you got it from.
-
-
\ No newline at end of file
diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/openaimodel.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/openaimodel.py
deleted file mode 100644
index 831d7aafb36bba16888e4389153979a6c13639f5..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/latent_diffusion/openaimodel.py
+++ /dev/null
@@ -1,1069 +0,0 @@
-from abc import abstractmethod
-import math
-
-import numpy as np
-import torch as th
-import torch.nn as nn
-import torch.nn.functional as F
-
-from audioldm.latent_diffusion.util import (
- checkpoint,
- conv_nd,
- linear,
- avg_pool_nd,
- zero_module,
- normalization,
- timestep_embedding,
-)
-from audioldm.latent_diffusion.attention import SpatialTransformer
-
-
-# dummy replace
-def convert_module_to_f16(x):
- pass
-
-
-def convert_module_to_f32(x):
- pass
-
-
-## go
-class AttentionPool2d(nn.Module):
- """
- Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py
- """
-
- def __init__(
- self,
- spacial_dim: int,
- embed_dim: int,
- num_heads_channels: int,
- output_dim: int = None,
- ):
- super().__init__()
- self.positional_embedding = nn.Parameter(
- th.randn(embed_dim, spacial_dim**2 + 1) / embed_dim**0.5
- )
- self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)
- self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)
- self.num_heads = embed_dim // num_heads_channels
- self.attention = QKVAttention(self.num_heads)
-
- def forward(self, x):
- b, c, *_spatial = x.shape
- x = x.reshape(b, c, -1).contiguous() # NC(HW)
- x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1)
- x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1)
- x = self.qkv_proj(x)
- x = self.attention(x)
- x = self.c_proj(x)
- return x[:, :, 0]
-
-
-class TimestepBlock(nn.Module):
- """
- Any module where forward() takes timestep embeddings as a second argument.
- """
-
- @abstractmethod
- def forward(self, x, emb):
- """
- Apply the module to `x` given `emb` timestep embeddings.
- """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
- """
- A sequential module that passes timestep embeddings to the children that
- support it as an extra input.
- """
-
- def forward(self, x, emb, context=None):
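-        # Dispatch extra inputs to the layers that accept them: timestep embeddings
-        # go to TimestepBlocks, cross-attention context goes to SpatialTransformers.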
- for layer in self:
- if isinstance(layer, TimestepBlock):
- x = layer(x, emb)
- elif isinstance(layer, SpatialTransformer):
- x = layer(x, context)
- else:
- x = layer(x)
- return x
-
-
-class Upsample(nn.Module):
- """
- An upsampling layer with an optional convolution.
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- upsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- if use_conv:
- self.conv = conv_nd(
- dims, self.channels, self.out_channels, 3, padding=padding
- )
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- if self.dims == 3:
- x = F.interpolate(
- x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
- )
- else:
- x = F.interpolate(x, scale_factor=2, mode="nearest")
- if self.use_conv:
- x = self.conv(x)
- return x
-
-
-class TransposedUpsample(nn.Module):
- "Learned 2x upsampling without padding"
-
- def __init__(self, channels, out_channels=None, ks=5):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
-
- self.up = nn.ConvTranspose2d(
- self.channels, self.out_channels, kernel_size=ks, stride=2
- )
-
- def forward(self, x):
- return self.up(x)
-
-
-class Downsample(nn.Module):
- """
- A downsampling layer with an optional convolution.
- :param channels: channels in the inputs and outputs.
- :param use_conv: a bool determining if a convolution is applied.
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
- downsampling occurs in the inner-two dimensions.
- """
-
- def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
- super().__init__()
- self.channels = channels
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.dims = dims
- stride = 2 if dims != 3 else (1, 2, 2)
- if use_conv:
- self.op = conv_nd(
- dims,
- self.channels,
- self.out_channels,
- 3,
- stride=stride,
- padding=padding,
- )
- else:
- assert self.channels == self.out_channels
- self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
- def forward(self, x):
- assert x.shape[1] == self.channels
- return self.op(x)
-
-
-class ResBlock(TimestepBlock):
- """
- A residual block that can optionally change the number of channels.
- :param channels: the number of input channels.
- :param emb_channels: the number of timestep embedding channels.
- :param dropout: the rate of dropout.
- :param out_channels: if specified, the number of out channels.
- :param use_conv: if True and out_channels is specified, use a spatial
- convolution instead of a smaller 1x1 convolution to change the
- channels in the skip connection.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param use_checkpoint: if True, use gradient checkpointing on this module.
- :param up: if True, use this block for upsampling.
- :param down: if True, use this block for downsampling.
- """
-
- def __init__(
- self,
- channels,
- emb_channels,
- dropout,
- out_channels=None,
- use_conv=False,
- use_scale_shift_norm=False,
- dims=2,
- use_checkpoint=False,
- up=False,
- down=False,
- ):
- super().__init__()
- self.channels = channels
- self.emb_channels = emb_channels
- self.dropout = dropout
- self.out_channels = out_channels or channels
- self.use_conv = use_conv
- self.use_checkpoint = use_checkpoint
- self.use_scale_shift_norm = use_scale_shift_norm
-
- self.in_layers = nn.Sequential(
- normalization(channels),
- nn.SiLU(),
- conv_nd(dims, channels, self.out_channels, 3, padding=1),
- )
-
- self.updown = up or down
-
- if up:
- self.h_upd = Upsample(channels, False, dims)
- self.x_upd = Upsample(channels, False, dims)
- elif down:
- self.h_upd = Downsample(channels, False, dims)
- self.x_upd = Downsample(channels, False, dims)
- else:
- self.h_upd = self.x_upd = nn.Identity()
-
- self.emb_layers = nn.Sequential(
- nn.SiLU(),
- linear(
- emb_channels,
- 2 * self.out_channels if use_scale_shift_norm else self.out_channels,
- ),
- )
- self.out_layers = nn.Sequential(
- normalization(self.out_channels),
- nn.SiLU(),
- nn.Dropout(p=dropout),
- zero_module(
- conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)
- ),
- )
-
- if self.out_channels == channels:
- self.skip_connection = nn.Identity()
- elif use_conv:
- self.skip_connection = conv_nd(
- dims, channels, self.out_channels, 3, padding=1
- )
- else:
- self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
-
- def forward(self, x, emb):
- """
- Apply the block to a Tensor, conditioned on a timestep embedding.
- :param x: an [N x C x ...] Tensor of features.
- :param emb: an [N x emb_channels] Tensor of timestep embeddings.
- :return: an [N x C x ...] Tensor of outputs.
- """
- return checkpoint(
- self._forward, (x, emb), self.parameters(), self.use_checkpoint
- )
-
- def _forward(self, x, emb):
- if self.updown:
- in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
- h = in_rest(x)
- h = self.h_upd(h)
- x = self.x_upd(x)
- h = in_conv(h)
- else:
- h = self.in_layers(x)
- emb_out = self.emb_layers(emb).type(h.dtype)
- while len(emb_out.shape) < len(h.shape):
- emb_out = emb_out[..., None]
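-        # Inject the embedding either as a FiLM-style scale/shift on the normalized
-        # features, or by simple addition before the output layers.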
- if self.use_scale_shift_norm:
- out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
- scale, shift = th.chunk(emb_out, 2, dim=1)
- h = out_norm(h) * (1 + scale) + shift
- h = out_rest(h)
- else:
- h = h + emb_out
- h = self.out_layers(h)
- return self.skip_connection(x) + h
-
-
-class AttentionBlock(nn.Module):
- """
- An attention block that allows spatial positions to attend to each other.
- Originally ported from here, but adapted to the N-d case.
- https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
- """
-
- def __init__(
- self,
- channels,
- num_heads=1,
- num_head_channels=-1,
- use_checkpoint=False,
- use_new_attention_order=False,
- ):
- super().__init__()
- self.channels = channels
- if num_head_channels == -1:
- self.num_heads = num_heads
- else:
- assert (
- channels % num_head_channels == 0
- ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
- self.num_heads = channels // num_head_channels
- self.use_checkpoint = use_checkpoint
- self.norm = normalization(channels)
- self.qkv = conv_nd(1, channels, channels * 3, 1)
- if use_new_attention_order:
- # split qkv before split heads
- self.attention = QKVAttention(self.num_heads)
- else:
- # split heads before split qkv
- self.attention = QKVAttentionLegacy(self.num_heads)
-
- self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
- def forward(self, x):
- return checkpoint(
- self._forward, (x,), self.parameters(), True
-        )  # TODO: verify checkpoint usage (always True here) and fix the .half() call
- # return pt_checkpoint(self._forward, x) # pytorch
-
- def _forward(self, x):
- b, c, *spatial = x.shape
- x = x.reshape(b, c, -1).contiguous()
- qkv = self.qkv(self.norm(x)).contiguous()
- h = self.attention(qkv).contiguous()
- h = self.proj_out(h).contiguous()
- return (x + h).reshape(b, c, *spatial).contiguous()
-
-
-def count_flops_attn(model, _x, y):
- """
- A counter for the `thop` package to count the operations in an
- attention operation.
- Meant to be used like:
- macs, params = thop.profile(
- model,
- inputs=(inputs, timestamps),
- custom_ops={QKVAttention: QKVAttention.count_flops},
- )
- """
- b, c, *spatial = y[0].shape
- num_spatial = int(np.prod(spatial))
- # We perform two matmuls with the same number of ops.
- # The first computes the weight matrix, the second computes
- # the combination of the value vectors.
- matmul_ops = 2 * b * (num_spatial**2) * c
- model.total_ops += th.DoubleTensor([matmul_ops])
-
-
-class QKVAttentionLegacy(nn.Module):
- """
-    A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
- :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = (
- qkv.reshape(bs * self.n_heads, ch * 3, length).contiguous().split(ch, dim=1)
- )
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts", q * scale, k * scale
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum("bts,bcs->bct", weight, v)
- return a.reshape(bs, -1, length).contiguous()
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class QKVAttention(nn.Module):
- """
- A module which performs QKV attention and splits in a different order.
- """
-
- def __init__(self, n_heads):
- super().__init__()
- self.n_heads = n_heads
-
- def forward(self, qkv):
- """
- Apply QKV attention.
- :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
- :return: an [N x (H * C) x T] tensor after attention.
- """
- bs, width, length = qkv.shape
- assert width % (3 * self.n_heads) == 0
- ch = width // (3 * self.n_heads)
- q, k, v = qkv.chunk(3, dim=1)
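-        # Unlike the legacy variant, qkv is split into q, k, v first and the heads are
-        # folded into the batch dimension inside the einsum views below.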
- scale = 1 / math.sqrt(math.sqrt(ch))
- weight = th.einsum(
- "bct,bcs->bts",
- (q * scale).view(bs * self.n_heads, ch, length),
- (k * scale).view(bs * self.n_heads, ch, length),
- ) # More stable with f16 than dividing afterwards
- weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
- a = th.einsum(
- "bts,bcs->bct",
- weight,
- v.reshape(bs * self.n_heads, ch, length).contiguous(),
- )
- return a.reshape(bs, -1, length).contiguous()
-
- @staticmethod
- def count_flops(model, _x, y):
- return count_flops_attn(model, _x, y)
-
-
-class UNetModel(nn.Module):
- """
- The full UNet model with attention and timestep embedding.
- :param in_channels: channels in the input Tensor.
- :param model_channels: base channel count for the model.
- :param out_channels: channels in the output Tensor.
- :param num_res_blocks: number of residual blocks per downsample.
- :param attention_resolutions: a collection of downsample rates at which
- attention will take place. May be a set, list, or tuple.
- For example, if this contains 4, then at 4x downsampling, attention
- will be used.
- :param dropout: the dropout probability.
- :param channel_mult: channel multiplier for each level of the UNet.
- :param conv_resample: if True, use learned convolutions for upsampling and
- downsampling.
- :param dims: determines if the signal is 1D, 2D, or 3D.
- :param num_classes: if specified (as an int), then this model will be
- class-conditional with `num_classes` classes.
- :param use_checkpoint: use gradient checkpointing to reduce memory usage.
- :param num_heads: the number of attention heads in each attention layer.
- :param num_heads_channels: if specified, ignore num_heads and instead use
- a fixed channel width per attention head.
- :param num_heads_upsample: works with num_heads to set a different number
- of heads for upsampling. Deprecated.
- :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
- :param resblock_updown: use residual blocks for up/downsampling.
- :param use_new_attention_order: use a different attention pattern for potentially
- increased efficiency.
- """
-
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- num_classes=None,
- extra_film_condition_dim=None,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=-1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
-        extra_film_use_concat=False,  # If True, concatenate the extra FiLM condition with the time embedding; otherwise add them
- resblock_updown=False,
- use_new_attention_order=False,
- use_spatial_transformer=False, # custom transformer support
- transformer_depth=1, # custom transformer support
- context_dim=None, # custom transformer support
- n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model
- legacy=True,
- ):
- super().__init__()
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- if num_heads == -1:
- assert (
- num_head_channels != -1
- ), "Either num_heads or num_head_channels has to be set"
-
- if num_head_channels == -1:
- assert (
- num_heads != -1
- ), "Either num_heads or num_head_channels has to be set"
-
- self.image_size = image_size
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.num_classes = num_classes
- self.extra_film_condition_dim = extra_film_condition_dim
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
- self.predict_codebook_ids = n_embed is not None
- self.extra_film_use_concat = extra_film_use_concat
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- assert not (
- self.num_classes is not None and self.extra_film_condition_dim is not None
-        ), "The UNet can be conditioned on either a class label or an extra embedding vector (such as from CLAP), but not both; set only one of num_classes and extra_film_condition_dim."
-
- if self.num_classes is not None:
- self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
- self.use_extra_film_by_concat = (
- self.extra_film_condition_dim is not None and self.extra_film_use_concat
- )
- self.use_extra_film_by_addition = (
- self.extra_film_condition_dim is not None and not self.extra_film_use_concat
- )
-
- if self.extra_film_condition_dim is not None:
- self.film_emb = nn.Linear(self.extra_film_condition_dim, time_embed_dim)
- # print("+ Use extra condition on UNet channel using Film. Extra condition dimension is %s. " % self.extra_film_condition_dim)
- # if(self.use_extra_film_by_concat):
- # print("\t By concatenation with time embedding")
-        # elif(self.use_extra_film_by_addition):
- # print("\t By addition with time embedding")
-
- if use_spatial_transformer and (
- self.use_extra_film_by_concat or self.use_extra_film_by_addition
- ):
-            # print("+ Spatial transformer will only be used as self-attention, because you have chosen to use FiLM as your global condition.")
- spatial_transformer_no_context = True
- else:
- spatial_transformer_no_context = False
-
- if use_spatial_transformer and not spatial_transformer_no_context:
- assert (
- context_dim is not None
- ), "Fool!! You forgot to include the dimension of your cross-attention conditioning..."
-
- if context_dim is not None and not spatial_transformer_no_context:
- assert (
- use_spatial_transformer
- ), "Fool!! You forgot to use the spatial transformer for your cross-attention conditioning..."
- from omegaconf.listconfig import ListConfig
-
- if type(context_dim) == ListConfig:
- context_dim = list(context_dim)
-
- self.input_blocks = nn.ModuleList(
- [
- TimestepEmbedSequential(
- conv_nd(dims, in_channels, model_channels, 3, padding=1)
- )
- ]
- )
- self._feature_size = model_channels
- input_block_chans = [model_channels]
- ch = model_channels
- ds = 1
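-        # ds tracks the current downsampling factor; an attention block (or a spatial
-        # transformer) is inserted whenever ds appears in attention_resolutions.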
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim
- if (not self.use_extra_film_by_concat)
- else time_embed_dim * 2,
- dropout,
- out_channels=mult * model_channels,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = mult * model_channels
- if ds in attention_resolutions:
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- dim_head = (
- ch // num_heads
- if use_spatial_transformer
- else num_head_channels
- )
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- )
- if not use_spatial_transformer
- else SpatialTransformer(
- ch,
- num_heads,
- dim_head,
- depth=transformer_depth,
- context_dim=context_dim,
- no_context=spatial_transformer_no_context,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim
- if (not self.use_extra_film_by_concat)
- else time_embed_dim * 2,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- # num_heads = 1
- dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim
- if (not self.use_extra_film_by_concat)
- else time_embed_dim * 2,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- )
- if not use_spatial_transformer
- else SpatialTransformer(
- ch,
- num_heads,
- dim_head,
- depth=transformer_depth,
- context_dim=context_dim,
- no_context=spatial_transformer_no_context,
- ),
- ResBlock(
- ch,
- time_embed_dim
- if (not self.use_extra_film_by_concat)
- else time_embed_dim * 2,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
-
- self.output_blocks = nn.ModuleList([])
- for level, mult in list(enumerate(channel_mult))[::-1]:
- for i in range(num_res_blocks + 1):
- ich = input_block_chans.pop()
- layers = [
- ResBlock(
- ch + ich,
- time_embed_dim
- if (not self.use_extra_film_by_concat)
- else time_embed_dim * 2,
- dropout,
- out_channels=model_channels * mult,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = model_channels * mult
- if ds in attention_resolutions:
- if num_head_channels == -1:
- dim_head = ch // num_heads
- else:
- num_heads = ch // num_head_channels
- dim_head = num_head_channels
- if legacy:
- # num_heads = 1
- dim_head = (
- ch // num_heads
- if use_spatial_transformer
- else num_head_channels
- )
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads_upsample,
- num_head_channels=dim_head,
- use_new_attention_order=use_new_attention_order,
- )
- if not use_spatial_transformer
- else SpatialTransformer(
- ch,
- num_heads,
- dim_head,
- depth=transformer_depth,
- context_dim=context_dim,
- no_context=spatial_transformer_no_context,
- )
- )
- if level and i == num_res_blocks:
- out_ch = ch
- layers.append(
- ResBlock(
- ch,
- time_embed_dim
- if (not self.use_extra_film_by_concat)
- else time_embed_dim * 2,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- up=True,
- )
- if resblock_updown
- else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
- )
- ds //= 2
- self.output_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
-
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),
- )
- if self.predict_codebook_ids:
- self.id_predictor = nn.Sequential(
- normalization(ch),
- conv_nd(dims, model_channels, n_embed, 1),
- # nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits
- )
-
- self.shape_reported = False
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
- self.output_blocks.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
- self.output_blocks.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps=None, context=None, y=None, **kwargs):
- """
- Apply the model to an input batch.
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :param context: conditioning plugged in via crossattn
- :param y: an [N] Tensor of labels, if class-conditional. an [N, extra_film_condition_dim] Tensor if film-embed conditional
- :return: an [N x C x ...] Tensor of outputs.
- """
- if not self.shape_reported:
- # print("The shape of UNet input is", x.size())
- self.shape_reported = True
-
- assert (y is not None) == (
- self.num_classes is not None or self.extra_film_condition_dim is not None
- ), "must specify y if and only if the model is class-conditional or film embedding conditional"
- hs = []
- t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)
- emb = self.time_embed(t_emb)
-
- if self.num_classes is not None:
- assert y.shape == (x.shape[0],)
- emb = emb + self.label_emb(y)
-
- if self.use_extra_film_by_addition:
- emb = emb + self.film_emb(y)
- elif self.use_extra_film_by_concat:
- emb = th.cat([emb, self.film_emb(y)], dim=-1)
-
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb, context)
- hs.append(h)
- h = self.middle_block(h, emb, context)
- for module in self.output_blocks:
- h = th.cat([h, hs.pop()], dim=1)
- h = module(h, emb, context)
- h = h.type(x.dtype)
- if self.predict_codebook_ids:
- return self.id_predictor(h)
- else:
- return self.out(h)
-
-
-class EncoderUNetModel(nn.Module):
- """
- The half UNet model with attention and timestep embedding.
- For usage, see UNet.
- """
-
- def __init__(
- self,
- image_size,
- in_channels,
- model_channels,
- out_channels,
- num_res_blocks,
- attention_resolutions,
- dropout=0,
- channel_mult=(1, 2, 4, 8),
- conv_resample=True,
- dims=2,
- use_checkpoint=False,
- use_fp16=False,
- num_heads=1,
- num_head_channels=-1,
- num_heads_upsample=-1,
- use_scale_shift_norm=False,
- resblock_updown=False,
- use_new_attention_order=False,
- pool="adaptive",
- *args,
- **kwargs,
- ):
- super().__init__()
-
- if num_heads_upsample == -1:
- num_heads_upsample = num_heads
-
- self.in_channels = in_channels
- self.model_channels = model_channels
- self.out_channels = out_channels
- self.num_res_blocks = num_res_blocks
- self.attention_resolutions = attention_resolutions
- self.dropout = dropout
- self.channel_mult = channel_mult
- self.conv_resample = conv_resample
- self.use_checkpoint = use_checkpoint
- self.dtype = th.float16 if use_fp16 else th.float32
- self.num_heads = num_heads
- self.num_head_channels = num_head_channels
- self.num_heads_upsample = num_heads_upsample
-
- time_embed_dim = model_channels * 4
- self.time_embed = nn.Sequential(
- linear(model_channels, time_embed_dim),
- nn.SiLU(),
- linear(time_embed_dim, time_embed_dim),
- )
-
- self.input_blocks = nn.ModuleList(
- [
- TimestepEmbedSequential(
- conv_nd(dims, in_channels, model_channels, 3, padding=1)
- )
- ]
- )
- self._feature_size = model_channels
- input_block_chans = [model_channels]
- ch = model_channels
- ds = 1
- for level, mult in enumerate(channel_mult):
- for _ in range(num_res_blocks):
- layers = [
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=mult * model_channels,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- )
- ]
- ch = mult * model_channels
- if ds in attention_resolutions:
- layers.append(
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- )
- )
- self.input_blocks.append(TimestepEmbedSequential(*layers))
- self._feature_size += ch
- input_block_chans.append(ch)
- if level != len(channel_mult) - 1:
- out_ch = ch
- self.input_blocks.append(
- TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- out_channels=out_ch,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- down=True,
- )
- if resblock_updown
- else Downsample(
- ch, conv_resample, dims=dims, out_channels=out_ch
- )
- )
- )
- ch = out_ch
- input_block_chans.append(ch)
- ds *= 2
- self._feature_size += ch
-
- self.middle_block = TimestepEmbedSequential(
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- AttentionBlock(
- ch,
- use_checkpoint=use_checkpoint,
- num_heads=num_heads,
- num_head_channels=num_head_channels,
- use_new_attention_order=use_new_attention_order,
- ),
- ResBlock(
- ch,
- time_embed_dim,
- dropout,
- dims=dims,
- use_checkpoint=use_checkpoint,
- use_scale_shift_norm=use_scale_shift_norm,
- ),
- )
- self._feature_size += ch
- self.pool = pool
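-        # Each pooling head reduces the spatial feature map to a single vector of size
-        # out_channels per sample.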
- if pool == "adaptive":
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- nn.AdaptiveAvgPool2d((1, 1)),
- zero_module(conv_nd(dims, ch, out_channels, 1)),
- nn.Flatten(),
- )
- elif pool == "attention":
- assert num_head_channels != -1
- self.out = nn.Sequential(
- normalization(ch),
- nn.SiLU(),
- AttentionPool2d(
- (image_size // ds), ch, num_head_channels, out_channels
- ),
- )
- elif pool == "spatial":
- self.out = nn.Sequential(
- nn.Linear(self._feature_size, 2048),
- nn.ReLU(),
- nn.Linear(2048, self.out_channels),
- )
- elif pool == "spatial_v2":
- self.out = nn.Sequential(
- nn.Linear(self._feature_size, 2048),
- normalization(2048),
- nn.SiLU(),
- nn.Linear(2048, self.out_channels),
- )
- else:
- raise NotImplementedError(f"Unexpected {pool} pooling")
-
- def convert_to_fp16(self):
- """
- Convert the torso of the model to float16.
- """
- self.input_blocks.apply(convert_module_to_f16)
- self.middle_block.apply(convert_module_to_f16)
-
- def convert_to_fp32(self):
- """
- Convert the torso of the model to float32.
- """
- self.input_blocks.apply(convert_module_to_f32)
- self.middle_block.apply(convert_module_to_f32)
-
- def forward(self, x, timesteps):
- """
- Apply the model to an input batch.
- :param x: an [N x C x ...] Tensor of inputs.
- :param timesteps: a 1-D batch of timesteps.
- :return: an [N x K] Tensor of outputs.
- """
- emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
-
- results = []
- h = x.type(self.dtype)
- for module in self.input_blocks:
- h = module(h, emb)
- if self.pool.startswith("spatial"):
- results.append(h.type(x.dtype).mean(dim=(2, 3)))
- h = self.middle_block(h, emb)
- if self.pool.startswith("spatial"):
- results.append(h.type(x.dtype).mean(dim=(2, 3)))
- h = th.cat(results, axis=-1)
- return self.out(h)
- else:
- h = h.type(x.dtype)
- return self.out(h)
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/loss.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/loss.py
deleted file mode 100644
index 53bbedd959813b072b146c16c14cd96df6cada14..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/loss.py
+++ /dev/null
@@ -1,307 +0,0 @@
-import torch
-import torch.distributed.nn
-from torch import distributed as dist, nn as nn
-from torch.nn import functional as F
-import numpy as np
-from sklearn.metrics import average_precision_score, roc_auc_score, accuracy_score
-
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-
-def gather_features(
- audio_features,
- text_features,
- audio_features_mlp=None,
- text_features_mlp=None,
- local_loss=False,
- gather_with_grad=False,
- rank=0,
- world_size=1,
- use_horovod=False,
- mlp_loss=False
-):
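-    # Gather audio/text features from every worker so the contrastive loss can be
-    # computed over the global batch; gather_with_grad keeps gradients flowing
-    # through the gathered tensors.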
- if use_horovod:
- assert hvd is not None, 'Please install horovod'
- if gather_with_grad:
- all_audio_features = hvd.allgather(audio_features)
- all_text_features = hvd.allgather(text_features)
- if mlp_loss:
- all_audio_features_mlp = hvd.allgather(audio_features_mlp)
- all_text_features_mlp = hvd.allgather(text_features_mlp)
- else:
- with torch.no_grad():
- all_audio_features = hvd.allgather(audio_features)
- all_text_features = hvd.allgather(text_features)
- if mlp_loss:
- all_audio_features_mlp = hvd.allgather(audio_features_mlp)
- all_text_features_mlp = hvd.allgather(text_features_mlp)
- if not local_loss:
- # ensure grads for local rank when all_* features don't have a gradient
- gathered_audio_features = list(all_audio_features.chunk(world_size, dim=0))
- gathered_text_features = list(all_text_features.chunk(world_size, dim=0))
- gathered_audio_features[rank] = audio_features
- gathered_text_features[rank] = text_features
- all_audio_features = torch.cat(gathered_audio_features, dim=0)
- all_text_features = torch.cat(gathered_text_features, dim=0)
- if mlp_loss:
- gathered_audio_features_mlp = list(all_audio_features_mlp.chunk(world_size, dim=0))
- gathered_text_features_mlp = list(all_text_features_mlp.chunk(world_size, dim=0))
- gathered_audio_features_mlp[rank] = audio_features_mlp
- gathered_text_features_mlp[rank] = text_features_mlp
- all_audio_features_mlp = torch.cat(gathered_audio_features_mlp, dim=0)
- all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0)
- else:
- # We gather tensors from all gpus
- if gather_with_grad:
- all_audio_features = torch.cat(torch.distributed.nn.all_gather(audio_features), dim=0)
- all_text_features = torch.cat(torch.distributed.nn.all_gather(text_features), dim=0)
- if mlp_loss:
- all_audio_features_mlp = torch.cat(torch.distributed.nn.all_gather(audio_features_mlp), dim=0)
- all_text_features_mlp = torch.cat(torch.distributed.nn.all_gather(text_features_mlp), dim=0)
- else:
- gathered_audio_features = [torch.zeros_like(audio_features) for _ in range(world_size)]
- gathered_text_features = [torch.zeros_like(text_features) for _ in range(world_size)]
- dist.all_gather(gathered_audio_features, audio_features)
- dist.all_gather(gathered_text_features, text_features)
- if mlp_loss:
- gathered_audio_features_mlp = [torch.zeros_like(audio_features_mlp) for _ in range(world_size)]
- gathered_text_features_mlp = [torch.zeros_like(text_features_mlp) for _ in range(world_size)]
- dist.all_gather(gathered_audio_features_mlp, audio_features_mlp)
- dist.all_gather(gathered_text_features_mlp, text_features_mlp)
- if not local_loss:
- # ensure grads for local rank when all_* features don't have a gradient
- gathered_audio_features[rank] = audio_features
- gathered_text_features[rank] = text_features
- if mlp_loss:
- gathered_audio_features_mlp[rank] = audio_features_mlp
- gathered_text_features_mlp[rank] = text_features_mlp
-
- all_audio_features = torch.cat(gathered_audio_features, dim=0)
- all_text_features = torch.cat(gathered_text_features, dim=0)
- if mlp_loss:
- all_audio_features_mlp = torch.cat(gathered_audio_features_mlp, dim=0)
- all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0)
- if mlp_loss:
- return all_audio_features, all_text_features, all_audio_features_mlp, all_text_features_mlp
- else:
- return all_audio_features, all_text_features
-
-class ClipLoss(nn.Module):
-
- def __init__(
- self,
- local_loss=False,
- gather_with_grad=False,
- cache_labels=False,
- rank=0,
- world_size=1,
- use_horovod=False,
- mlp_loss=False,
- weight_loss_kappa=0,
- ):
- super().__init__()
- self.local_loss = local_loss
- self.gather_with_grad = gather_with_grad
- self.cache_labels = cache_labels
- self.rank = rank
- self.world_size = world_size
- self.use_horovod = use_horovod
- self.mlp_loss = mlp_loss
- self.weighted_loss = bool(weight_loss_kappa!=0)
- self.weight_loss_kappa = weight_loss_kappa
- # cache state
- self.prev_num_logits = 0
- self.labels = {}
-
- def forward(self, audio_features, text_features, logit_scale_a, logit_scale_t=None, audio_features_mlp=None, text_features_mlp=None):
- device = audio_features.device
- if self.mlp_loss:
- if self.world_size > 1:
- all_audio_features, all_text_features, all_audio_features_mlp, all_text_features_mlp = gather_features(
- audio_features=audio_features,text_features=text_features,
- audio_features_mlp=audio_features_mlp,text_features_mlp=text_features_mlp,
- local_loss=self.local_loss,gather_with_grad=self.gather_with_grad,
- rank=self.rank,world_size=self.world_size,use_horovod=self.use_horovod,
- mlp_loss=self.mlp_loss
- )
- if self.local_loss:
- a_logits_per_audio = logit_scale_a * audio_features @ all_text_features_mlp.T
- a_logits_per_text = logit_scale_a * text_features_mlp @ all_audio_features.T
- t_logits_per_audio = logit_scale_t * audio_features_mlp @ all_text_features.T
- t_logits_per_text = logit_scale_t * text_features @ all_audio_features_mlp.T
- else:
- a_logits_per_audio = logit_scale_a * all_audio_features @ all_text_features_mlp.T
- a_logits_per_text = a_logits_per_audio.T
- t_logits_per_audio = logit_scale_t * all_audio_features_mlp @ all_text_features.T
- t_logits_per_text = t_logits_per_audio.T
- else:
- a_logits_per_audio = logit_scale_a * audio_features @ text_features_mlp.T
- a_logits_per_text = logit_scale_a * text_features_mlp @ audio_features.T
- t_logits_per_audio = logit_scale_t * audio_features_mlp @ text_features.T
- t_logits_per_text = logit_scale_t * text_features @ audio_features_mlp.T
-
-            # calculate ground-truth labels and cache them if enabled
- num_logits = a_logits_per_audio.shape[0]
- if self.prev_num_logits != num_logits or device not in self.labels:
- labels = torch.arange(num_logits, device=device, dtype=torch.long)
- if self.world_size > 1 and self.local_loss:
- labels = labels + num_logits * self.rank
- if self.cache_labels:
- self.labels[device] = labels
- self.prev_num_logits = num_logits
- else:
- labels = self.labels[device]
-
- if not self.weighted_loss:
- total_loss = (
- F.cross_entropy(a_logits_per_audio, labels) +
- F.cross_entropy(a_logits_per_text, labels) +
- F.cross_entropy(t_logits_per_audio, labels) +
- F.cross_entropy(t_logits_per_text, labels)
- ) / 4
- else:
- audio_weight = (audio_features@audio_features.T).detach()
- audio_weight = (torch.exp(torch.sum(audio_weight, axis=1)/(self.weight_loss_kappa*len(audio_weight)))).detach()
- text_weight = (text_features@text_features.T).detach()
- text_weight = (torch.exp(torch.sum(text_weight, axis=1)/(self.weight_loss_kappa*len(text_features)))).detach()
- total_loss = (
- F.cross_entropy(a_logits_per_audio, labels, weight=audio_weight) +
- F.cross_entropy(a_logits_per_text, labels, weight=audio_weight) +
- F.cross_entropy(t_logits_per_audio, labels, weight=text_weight) +
- F.cross_entropy(t_logits_per_text, labels, weight=text_weight)
- ) / 4
- else:
- if self.world_size > 1:
- all_audio_features, all_text_features = gather_features(
- audio_features=audio_features,text_features=text_features,
- local_loss=self.local_loss,gather_with_grad=self.gather_with_grad,
- rank=self.rank,world_size=self.world_size,use_horovod=self.use_horovod,
- mlp_loss=self.mlp_loss
- )
-
- if self.local_loss:
- logits_per_audio = logit_scale_a * audio_features @ all_text_features.T
- logits_per_text = logit_scale_a * text_features @ all_audio_features.T
- else:
- logits_per_audio = logit_scale_a * all_audio_features @ all_text_features.T
- logits_per_text = logits_per_audio.T
- else:
- logits_per_audio = logit_scale_a * audio_features @ text_features.T
- logits_per_text = logit_scale_a * text_features @ audio_features.T
-
-            # calculate ground-truth labels and cache them if enabled
- num_logits = logits_per_audio.shape[0]
- if self.prev_num_logits != num_logits or device not in self.labels:
- labels = torch.arange(num_logits, device=device, dtype=torch.long)
- if self.world_size > 1 and self.local_loss:
- labels = labels + num_logits * self.rank
- if self.cache_labels:
- self.labels[device] = labels
- self.prev_num_logits = num_logits
- else:
- labels = self.labels[device]
- if not self.weighted_loss:
- total_loss = (
- F.cross_entropy(logits_per_audio, labels) +
- F.cross_entropy(logits_per_text, labels)
- ) / 2
- else:
- audio_weight = (all_audio_features@all_audio_features.T).detach()
- audio_weight = (torch.exp(torch.sum(audio_weight, axis=1)/(self.weight_loss_kappa*len(all_audio_features)))).detach()
- text_weight = (all_text_features@all_text_features.T).detach()
- text_weight = (torch.exp(torch.sum(text_weight, axis=1)/(self.weight_loss_kappa*len(all_text_features)))).detach()
- total_loss = (
- F.cross_entropy(logits_per_audio, labels, weight=text_weight) +
- F.cross_entropy(logits_per_text, labels, weight=audio_weight)
- ) / 2
- return total_loss
-
-def lp_gather_features(
- pred,
- target,
- world_size=1,
- use_horovod=False
-):
- if use_horovod:
- assert hvd is not None, 'Please install horovod'
- with torch.no_grad():
- all_preds = hvd.allgather(pred)
-            all_targets = hvd.allgather(target)
- else:
- gathered_preds = [torch.zeros_like(pred) for _ in range(world_size)]
- gathered_targets = [torch.zeros_like(target) for _ in range(world_size)]
-
- dist.all_gather(gathered_preds, pred)
- dist.all_gather(gathered_targets, target)
- all_preds = torch.cat(gathered_preds, dim=0)
- all_targets = torch.cat(gathered_targets, dim=0)
-
- return all_preds, all_targets
-
-
-def get_map(pred, target):
- pred = torch.sigmoid(pred).numpy()
- target = target.numpy()
- return np.mean(average_precision_score(target, pred, average=None))
-
-def get_acc(pred, target):
- pred = torch.argmax(pred,1).numpy()
- target = torch.argmax(target,1).numpy()
- return accuracy_score(target, pred)
-
-def get_mauc(pred, target):
- pred = torch.sigmoid(pred).numpy()
- target = target.numpy()
- return np.mean(roc_auc_score(target, pred, average=None))
-
-
-class LPMetrics(object):
- def __init__(self, metric_names = ['map','acc','mauc']):
- self.metrics = []
- for name in metric_names:
- self.metrics.append(self.get_metric(name))
- self.metric_names = metric_names
-
- def get_metric(self,name):
- if name == 'map':
- return get_map
- elif name == 'acc':
- return get_acc
- elif name == 'mauc':
- return get_mauc
- else:
-            raise ValueError('the metric must be one of [map, acc, mauc]')
-
- def evaluate_mertics(self, pred, target):
- metric_dict = {}
- for i in range(len(self.metric_names)):
- metric_dict[self.metric_names[i]] = self.metrics[i](pred, target)
- return metric_dict
-
-
-def calc_celoss(pred, target):
- target = torch.argmax(target, 1).long()
- return nn.CrossEntropyLoss()(pred, target)
-
-
-class LPLoss(nn.Module):
-
- def __init__(self, loss_name):
- super().__init__()
- if loss_name == 'bce':
- self.loss_func = nn.BCEWithLogitsLoss()
- elif loss_name == 'ce':
- self.loss_func = calc_celoss
- elif loss_name == 'mse':
- self.loss_func = nn.MSELoss()
- else:
-            raise ValueError('the loss func must be one of [bce, ce, mse]')
-
- def forward(self, pred, target):
- loss = self.loss_func(pred, target)
- return loss
-
\ No newline at end of file
diff --git a/spaces/ASJMO/freegpt/g4f/Provider/Providers/ChatFree.py b/spaces/ASJMO/freegpt/g4f/Provider/Providers/ChatFree.py
deleted file mode 100644
index 6bbaebaed35681026ff1eeb8eee3270e3b0741fd..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/g4f/Provider/Providers/ChatFree.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os, requests
-from ...typing import sha256, Dict, get_type_hints
-import json
-
-url = "https://v.chatfree.cc"
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k']
-supports_stream = False
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- headers = {
- 'authority': 'chat.dfehub.com',
- 'accept': '*/*',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'content-type': 'application/json',
- 'origin': 'https://v.chatfree.cc',
- 'referer': 'https://v.chatfree.cc/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"macOS"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- 'x-requested-with': 'XMLHttpRequest',
- }
-
- json_data = {
- 'messages': messages,
- 'stream': True,
- 'model': model,
- 'temperature': 0.5,
- 'presence_penalty': 0,
- 'frequency_penalty': 0,
- 'top_p': 1,
- }
-
- response = requests.post('https://v.chatfree.cc/api/openai/v1/chat/completions',
- headers=headers, json=json_data)
-
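-    # The endpoint answers with Server-Sent Events; extract the JSON payload after
-    # each 'data: ' prefix and yield the incremental completion text.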
- for chunk in response.iter_lines():
- if b'content' in chunk:
- data = json.loads(chunk.decode().split('data: ')[1])
- yield (data['choices'][0]['delta']['content'])
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
\ No newline at end of file
diff --git a/spaces/Abhilashvj/planogram-compliance/inference.py b/spaces/Abhilashvj/planogram-compliance/inference.py
deleted file mode 100644
index 0a5dd1d98b9167432a4a352a2cd378bf7b74ae9c..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/inference.py
+++ /dev/null
@@ -1,226 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Run YOLOv5 detection inference on images, videos, directories, globs, YouTube, webcam, streams, etc.
-
-Usage - sources:
- $ python detect.py --weights yolov5s.pt --source 0 # webcam
- img.jpg # image
- vid.mp4 # video
- screen # screenshot
- path/ # directory
- list.txt # list of images
- list.streams # list of streams
- 'path/*.jpg' # glob
- 'https://youtu.be/Zgi9g1ksQHc' # YouTube
- 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
-
-Usage - formats:
- $ python detect.py --weights yolov5s.pt # PyTorch
- yolov5s.torchscript # TorchScript
- yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
- yolov5s_openvino_model # OpenVINO
- yolov5s.engine # TensorRT
- yolov5s.mlmodel # CoreML (macOS-only)
- yolov5s_saved_model # TensorFlow SavedModel
- yolov5s.pb # TensorFlow GraphDef
- yolov5s.tflite # TensorFlow Lite
- yolov5s_edgetpu.tflite # TensorFlow Edge TPU
- yolov5s_paddle_model # PaddlePaddle
-"""
-
-import argparse
-import os
-import platform
-import sys
-from pathlib import Path
-
-import torch
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[0] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
-from models.common import DetectMultiBackend
-from utils.dataloaders import (
- IMG_FORMATS,
- VID_FORMATS,
- LoadImages,
- LoadScreenshots,
- LoadStreams,
-)
-from utils.general import (
- LOGGER,
- Profile,
- check_file,
- check_img_size,
- check_imshow,
- check_requirements,
- colorstr,
- cv2,
- increment_path,
- non_max_suppression,
- print_args,
- scale_boxes,
- strip_optimizer,
- xyxy2xywh,
-)
-from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import select_device, smart_inference_mode
-
-
-@smart_inference_mode()
-def run(
- weights=ROOT / "yolov5s.pt", # model path or triton URL
- source=ROOT / "data/images", # file/dir/URL/glob/screen/0(webcam)
- data=ROOT / "data/coco128.yaml", # dataset.yaml path
- imgsz=(640, 640), # inference size (height, width)
- conf_thres=0.25, # confidence threshold
- iou_thres=0.45, # NMS IOU threshold
- max_det=1000, # maximum detections per image
- device="", # cuda device, i.e. 0 or 0,1,2,3 or cpu
- view_img=False, # show results
- save_txt=False, # save results to *.txt
- save_conf=False, # save confidences in --save-txt labels
- save_crop=False, # save cropped prediction boxes
- nosave=False, # do not save images/videos
- classes=None, # filter by class: --class 0, or --class 0 2 3
- agnostic_nms=False, # class-agnostic NMS
- augment=False, # augmented inference
- visualize=False, # visualize features
- update=False, # update all models
- project=ROOT / "runs/detect", # save results to project/name
- name="exp", # save results to project/name
- exist_ok=False, # existing project/name ok, do not increment
- line_thickness=3, # bounding box thickness (pixels)
- hide_labels=False, # hide labels
- hide_conf=False, # hide confidences
- half=False, # use FP16 half-precision inference
- dnn=False, # use OpenCV DNN for ONNX inference
- vid_stride=1, # video frame-rate stride
-):
- source = str(source)
- save_img = not nosave and not source.endswith(
- ".txt"
- ) # save inference images
- is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
- is_url = source.lower().startswith(
- ("rtsp://", "rtmp://", "http://", "https://")
- )
- webcam = (
- source.isnumeric()
- or source.endswith(".streams")
- or (is_url and not is_file)
- )
- screenshot = source.lower().startswith("screen")
- if is_url and is_file:
- source = check_file(source) # download
-
- # Directories
- save_dir = increment_path(
- Path(project) / name, exist_ok=exist_ok
- ) # increment run
- (save_dir / "labels" if save_txt else save_dir).mkdir(
- parents=True, exist_ok=True
- ) # make dir
-
- # Load model
- device = select_device(device)
- model = DetectMultiBackend(
- weights, device=device, dnn=dnn, data=data, fp16=half
- )
- stride, names, pt = model.stride, model.names, model.pt
- imgsz = check_img_size(imgsz, s=stride) # check image size
-
- # Dataloader
- bs = 1 # batch_size
- if webcam:
- view_img = check_imshow(warn=True)
- dataset = LoadStreams(
- source,
- img_size=imgsz,
- stride=stride,
- auto=pt,
- vid_stride=vid_stride,
- )
- bs = len(dataset)
- elif screenshot:
- dataset = LoadScreenshots(
- source, img_size=imgsz, stride=stride, auto=pt
- )
- else:
- dataset = LoadImages(
- source,
- img_size=imgsz,
- stride=stride,
- auto=pt,
- vid_stride=vid_stride,
- )
- vid_path, vid_writer = [None] * bs, [None] * bs
-
- # Run inference
- model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup
-    seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
-    results = []  # accumulate (path, detections) pairs across the whole dataset
- for path, im, im0s, vid_cap, s in dataset:
- with dt[0]:
- im = torch.from_numpy(im).to(model.device)
- im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
- im /= 255 # 0 - 255 to 0.0 - 1.0
- if len(im.shape) == 3:
- im = im[None] # expand for batch dim
-
- # Inference
- with dt[1]:
- visualize = (
- increment_path(save_dir / Path(path).stem, mkdir=True)
- if visualize
- else False
- )
- pred = model(im, augment=augment, visualize=visualize)
-
- # NMS
- with dt[2]:
- pred = non_max_suppression(
- pred,
- conf_thres,
- iou_thres,
- classes,
- agnostic_nms,
- max_det=max_det,
- )
-
- # Second-stage classifier (optional)
- # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
-
- # Process predictions
- for i, det in enumerate(pred): # per image
- seen += 1
- if webcam: # batch_size >= 1
- p, im0, frame = path[i], im0s[i].copy(), dataset.count
- s += f"{i}: "
- else:
- p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)
-
- p = Path(p) # to Path
- save_path = str(save_dir / p.name) # im.jpg
- txt_path = str(save_dir / "labels" / p.stem) + (
- "" if dataset.mode == "image" else f"_{frame}"
- ) # im.txt
- s += "%gx%g " % im.shape[2:] # print string
- gn = torch.tensor(im0.shape)[
- [1, 0, 1, 0]
- ] # normalization gain whwh
- imc = im0.copy() if save_crop else im0 # for save_crop
- annotator = Annotator(
- im0, line_width=line_thickness, example=str(names)
- )
-            if len(det):
- # Rescale boxes from img_size to im0 size
- det[:, :4] = scale_boxes(
- im.shape[2:], det[:, :4], im0.shape
- ).round()
- results.append((path, det))
-
- return results
diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/__init__.py
deleted file mode 100644
index 803d139202b46a3a2e1539cd2140ce6bad90f5f5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/describer/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from agentverse.registry import Registry
-
-describer_registry = Registry(name="DescriberRegistry")
-
-from .base import BaseDescriber
-from .basic import BasicDescriber
-from .classroom import ClassroomDescriber
-from .pokemon import PokemonDescriber
-from .prisoner import PrisonerDescriber
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/TextObjectMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/TextObjectMethods.js
deleted file mode 100644
index b1638cb02c38be3ef60c37ec4beb3ab1ccff2337..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/knob/TextObjectMethods.js
+++ /dev/null
@@ -1,36 +0,0 @@
-var SetTextFormatCallback = function (callback, scope) {
- this.textFormatCallback = callback;
- this.textFormatCallbackScope = scope;
- return this;
-}
-
-var GetFormatText = function (value) {
- if (value === undefined) {
- value = this.value;
- }
-
- var text;
-    if (this.textFormatCallbackScope) {
-        // Invoke the callback with its registered scope when one was provided
-        text = this.textFormatCallback.call(this.textFormatCallbackScope, value);
-    } else {
-        text = this.textFormatCallback(value);
-    }
- return text;
-}
-
-var UpdateText = function (value) {
- var textObject = this.sizerChildren.text;
- if (textObject && this.textFormatCallback) {
- textObject.setText(GetFormatText.call(this, value));
- if (textObject.layout) {
- textObject.layout();
- }
- }
- return this;
-}
-
-export default {
- setTextFormatCallback: SetTextFormatCallback,
- getFormatText: GetFormatText,
- updateText: UpdateText
-}
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/bin/paper_runfiles/update_test_data_stats.sh b/spaces/AlexWang/lama/bin/paper_runfiles/update_test_data_stats.sh
deleted file mode 100644
index ff77d586f308202fbd019d8cc4be641f0d6aa1a5..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/paper_runfiles/update_test_data_stats.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml7
-
-source "$(dirname $0)/env.sh"
-
-#INDIR="/data/inpainting/paper_data/Places365_val_test/test_large_30k"
-#
-#for dataset in random_medium_256 random_medium_512 random_thick_256 random_thick_512 random_thin_256 random_thin_512
-#do
-# "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2"
-#done
-#
-#"$BINDIR/calc_dataset_stats.py" "/data/inpainting/evalset2" "/data/inpainting/evalset2_stats2"
-
-
-INDIR="/data/inpainting/paper_data/CelebA-HQ_val_test/test"
-
-for dataset in random_medium_256 random_thick_256 random_thin_256
-do
- "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2"
-done
-
-
-INDIR="/data/inpainting/paper_data/Paris_StreetView_Dataset_val_256/paris_eval_gt"
-
-for dataset in random_medium_256 random_thick_256 random_thin_256
-do
- "$BINDIR/calc_dataset_stats.py" "$INDIR/$dataset" "$INDIR/${dataset}_stats2"
-done
\ No newline at end of file
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r34.py b/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r34.py
deleted file mode 100644
index 5f78337a3d1f9eb6e9145eb5093618796c6842d2..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r34.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "r34"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 25
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/Alpaca233/SadTalker/src/face3d/options/test_options.py b/spaces/Alpaca233/SadTalker/src/face3d/options/test_options.py
deleted file mode 100644
index 4ff3ad142779850d1d5a1640bc00f70d34d4a862..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/face3d/options/test_options.py
+++ /dev/null
@@ -1,21 +0,0 @@
-"""This script contains the test options for Deep3DFaceRecon_pytorch
-"""
-
-from .base_options import BaseOptions
-
-
-class TestOptions(BaseOptions):
- """This class includes test options.
-
- It also includes shared options defined in BaseOptions.
- """
-
- def initialize(self, parser):
- parser = BaseOptions.initialize(self, parser) # define shared options
- parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
- parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')
- parser.add_argument('--img_folder', type=str, default='examples', help='folder for test images.')
-
- # Dropout and Batchnorm has different behavior during training and test.
- self.isTrain = False
- return parser
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/lora/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/lora/README.md
deleted file mode 100644
index b5d72403166f9b4017751c3d47f79a9eb3f535d8..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/lora/README.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# Stable Diffusion text-to-image fine-tuning
-This extended LoRA training script was authored by [haofanwang](https://github.com/haofanwang).
-This is an experimental LoRA extension of [this example](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py). It additionally supports adding LoRA layers to the text encoder.
-
-## Training with LoRA
-
-Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.
-
-In a nutshell, LoRA adapts pretrained models by adding pairs of rank-decomposition matrices to existing weights and training **only** those newly added weights. This has a couple of advantages (a minimal sketch of such a layer follows the list):
-
-- The pretrained weights are kept frozen, so the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
-- The rank-decomposition matrices have significantly fewer parameters than the original model, which makes the trained LoRA weights easily portable.
-- LoRA attention layers let you control the extent to which the model is adapted toward the new training images via a `scale` parameter.
-
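For intuition, here is a minimal sketch of what such a LoRA-wrapped linear layer looks like. This is an illustration only, not the code used by `train_text_to_image_lora.py`; the class name `LoRALinear` and the `r`/`alpha` values are assumptions chosen to mirror the flags passed further below.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base Linear plus a trainable low-rank update B @ A (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A starts with small random values and B with zeros, so training begins
        # from the unmodified pretrained behaviour.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path + scaled low-rank update
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


layer = LoRALinear(nn.Linear(768, 768), r=4, alpha=32)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the two small LoRA matrices receive gradients
```

Only `lora_a` and `lora_b` are trained, which is why the saved LoRA weights are so small and portable.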
-[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
-
-With LoRA, it's possible to fine-tune Stable Diffusion on a custom image-caption pair dataset
-on consumer GPUs like Tesla T4, Tesla V100.
-
-### Training
-
-First, you need to set up your development environment as is explained in the [installation section](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables. Here, we will use [Stable Diffusion v1-4](https://hf.co/CompVis/stable-diffusion-v1-4) and the [Pokemons dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
-
-**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
-
-**___Note: It is quite useful to monitor the training progress by regularly generating sample images during training. [Weights and Biases](https://docs.wandb.ai/quickstart) is a nice solution to easily see the generated images during training. All you need to do is run `pip install wandb` before training to automatically log images.___**
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
-```
-
-For this example we want to directly store the trained LoRA embeddings on the Hub, so
-we need to be logged in and add the `--push_to_hub` flag.
-
-```bash
-huggingface-cli login
-```
-
-Now we can start training!
-
-```bash
-accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --dataset_name=$DATASET_NAME --caption_column="text" \
- --resolution=512 --random_flip \
- --train_batch_size=1 \
- --num_train_epochs=100 --checkpointing_steps=5000 \
- --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
- --seed=42 \
- --output_dir="sd-pokemon-model-lora" \
-  --validation_prompt="cute dragon creature" --report_to="wandb" \
- --use_peft \
- --lora_r=4 --lora_alpha=32 \
- --lora_text_encoder_r=4 --lora_text_encoder_alpha=32
-```
-
-The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
-
-**___Note: When using LoRA we can use a much higher learning rate compared to non-LoRA fine-tuning. Here we use *1e-4* instead of the usual *1e-5*. Also, by using LoRA, it's possible to run `train_text_to_image_lora.py` on consumer GPUs like T4 or V100.___**
-
-The final LoRA embedding weights have been uploaded to [sayakpaul/sd-model-finetuned-lora-t4](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4). **___Note: [The final weights](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/pytorch_lora_weights.bin) are only 3 MB in size, which is orders of magnitude smaller than the original model.___**
-
-You can check some inference samples that were logged during the course of the fine-tuning process [here](https://wandb.ai/sayakpaul/text2image-fine-tune/runs/q4lc0xsw).
-
-### Inference
-
-Once you have trained a model using the above command, inference can be done simply using the `StableDiffusionPipeline` after loading the trained LoRA weights. You
-need to pass the `output_dir` used for saving the LoRA weights which, in this case, is `sd-pokemon-model-lora`.
-
-```python
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_path = "sayakpaul/sd-model-finetuned-lora-t4"
-pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
-pipe.unet.load_attn_procs(model_path)
-pipe.to("cuda")
-
-prompt = "A pokemon with green eyes and red legs."
-image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
-image.save("pokemon.png")
-```
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py
deleted file mode 100644
index f22ede9dede963d48264e5ee8ef76087e6879a8f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py
+++ /dev/null
@@ -1,417 +0,0 @@
-# Copyright 2023 Kakao Brain and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import inspect
-from typing import List, Optional, Union
-
-import PIL
-import torch
-from torch.nn import functional as F
-from transformers import (
- CLIPImageProcessor,
- CLIPTextModelWithProjection,
- CLIPTokenizer,
- CLIPVisionModelWithProjection,
-)
-
-from ...models import UNet2DConditionModel, UNet2DModel
-from ...schedulers import UnCLIPScheduler
-from ...utils import logging, randn_tensor
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-from .text_proj import UnCLIPTextProjModel
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-
-class UnCLIPImageVariationPipeline(DiffusionPipeline):
- """
- Pipeline to generate image variations from an input image using UnCLIP.
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
- implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
- Args:
- text_encoder ([`~transformers.CLIPTextModelWithProjection`]):
- Frozen text-encoder.
- tokenizer ([`~transformers.CLIPTokenizer`]):
- A `CLIPTokenizer` to tokenize text.
- feature_extractor ([`~transformers.CLIPImageProcessor`]):
- Model that extracts features from generated images to be used as inputs for the `image_encoder`.
- image_encoder ([`~transformers.CLIPVisionModelWithProjection`]):
- Frozen CLIP image-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
- text_proj ([`UnCLIPTextProjModel`]):
- Utility class to prepare and combine the embeddings before they are passed to the decoder.
- decoder ([`UNet2DConditionModel`]):
- The decoder to invert the image embedding into an image.
- super_res_first ([`UNet2DModel`]):
- Super resolution UNet. Used in all but the last step of the super resolution diffusion process.
- super_res_last ([`UNet2DModel`]):
- Super resolution UNet. Used in the last step of the super resolution diffusion process.
- decoder_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the decoder denoising process (a modified [`DDPMScheduler`]).
- super_res_scheduler ([`UnCLIPScheduler`]):
- Scheduler used in the super resolution denoising process (a modified [`DDPMScheduler`]).
- """
-
- decoder: UNet2DConditionModel
- text_proj: UnCLIPTextProjModel
- text_encoder: CLIPTextModelWithProjection
- tokenizer: CLIPTokenizer
- feature_extractor: CLIPImageProcessor
- image_encoder: CLIPVisionModelWithProjection
- super_res_first: UNet2DModel
- super_res_last: UNet2DModel
-
- decoder_scheduler: UnCLIPScheduler
- super_res_scheduler: UnCLIPScheduler
-
- def __init__(
- self,
- decoder: UNet2DConditionModel,
- text_encoder: CLIPTextModelWithProjection,
- tokenizer: CLIPTokenizer,
- text_proj: UnCLIPTextProjModel,
- feature_extractor: CLIPImageProcessor,
- image_encoder: CLIPVisionModelWithProjection,
- super_res_first: UNet2DModel,
- super_res_last: UNet2DModel,
- decoder_scheduler: UnCLIPScheduler,
- super_res_scheduler: UnCLIPScheduler,
- ):
- super().__init__()
-
- self.register_modules(
- decoder=decoder,
- text_encoder=text_encoder,
- tokenizer=tokenizer,
- text_proj=text_proj,
- feature_extractor=feature_extractor,
- image_encoder=image_encoder,
- super_res_first=super_res_first,
- super_res_last=super_res_last,
- decoder_scheduler=decoder_scheduler,
- super_res_scheduler=super_res_scheduler,
- )
-
- # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
- def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
- if latents is None:
- latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
- else:
- if latents.shape != shape:
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
- latents = latents.to(device)
-
- latents = latents * scheduler.init_noise_sigma
- return latents
-
- def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
- batch_size = len(prompt) if isinstance(prompt, list) else 1
-
- # get prompt text embeddings
- text_inputs = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- return_tensors="pt",
- )
- text_input_ids = text_inputs.input_ids
- text_mask = text_inputs.attention_mask.bool().to(device)
- text_encoder_output = self.text_encoder(text_input_ids.to(device))
-
- prompt_embeds = text_encoder_output.text_embeds
- text_encoder_hidden_states = text_encoder_output.last_hidden_state
-
- prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- text_encoder_hidden_states = text_encoder_hidden_states.repeat_interleave(num_images_per_prompt, dim=0)
- text_mask = text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- if do_classifier_free_guidance:
- uncond_tokens = [""] * batch_size
-
- max_length = text_input_ids.shape[-1]
- uncond_input = self.tokenizer(
- uncond_tokens,
- padding="max_length",
- max_length=max_length,
- truncation=True,
- return_tensors="pt",
- )
- uncond_text_mask = uncond_input.attention_mask.bool().to(device)
- negative_prompt_embeds_text_encoder_output = self.text_encoder(uncond_input.input_ids.to(device))
-
- negative_prompt_embeds = negative_prompt_embeds_text_encoder_output.text_embeds
- uncond_text_encoder_hidden_states = negative_prompt_embeds_text_encoder_output.last_hidden_state
-
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-
- seq_len = negative_prompt_embeds.shape[1]
- negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt)
- negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len)
-
- seq_len = uncond_text_encoder_hidden_states.shape[1]
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.repeat(1, num_images_per_prompt, 1)
- uncond_text_encoder_hidden_states = uncond_text_encoder_hidden_states.view(
- batch_size * num_images_per_prompt, seq_len, -1
- )
- uncond_text_mask = uncond_text_mask.repeat_interleave(num_images_per_prompt, dim=0)
-
- # done duplicates
-
- # For classifier free guidance, we need to do two forward passes.
- # Here we concatenate the unconditional and text embeddings into a single batch
- # to avoid doing two forward passes
- prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
- text_encoder_hidden_states = torch.cat([uncond_text_encoder_hidden_states, text_encoder_hidden_states])
-
- text_mask = torch.cat([uncond_text_mask, text_mask])
-
- return prompt_embeds, text_encoder_hidden_states, text_mask
-
- def _encode_image(self, image, device, num_images_per_prompt, image_embeddings: Optional[torch.Tensor] = None):
- dtype = next(self.image_encoder.parameters()).dtype
-
- if image_embeddings is None:
- if not isinstance(image, torch.Tensor):
- image = self.feature_extractor(images=image, return_tensors="pt").pixel_values
-
- image = image.to(device=device, dtype=dtype)
- image_embeddings = self.image_encoder(image).image_embeds
-
- image_embeddings = image_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
-
- return image_embeddings
-
- @torch.no_grad()
- def __call__(
- self,
- image: Optional[Union[PIL.Image.Image, List[PIL.Image.Image], torch.FloatTensor]] = None,
- num_images_per_prompt: int = 1,
- decoder_num_inference_steps: int = 25,
- super_res_num_inference_steps: int = 7,
- generator: Optional[torch.Generator] = None,
- decoder_latents: Optional[torch.FloatTensor] = None,
- super_res_latents: Optional[torch.FloatTensor] = None,
- image_embeddings: Optional[torch.Tensor] = None,
- decoder_guidance_scale: float = 8.0,
- output_type: Optional[str] = "pil",
- return_dict: bool = True,
- ):
- """
- The call function to the pipeline for generation.
-
- Args:
- image (`PIL.Image.Image` or `List[PIL.Image.Image]` or `torch.FloatTensor`):
- `Image` or tensor representing an image batch to be used as the starting point. If you provide a
- tensor, it needs to be compatible with the [`CLIPImageProcessor`]
- [configuration](https://huggingface.co/fusing/karlo-image-variations-diffusers/blob/main/feature_extractor/preprocessor_config.json).
- Can be left as `None` only when `image_embeddings` are passed.
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- decoder_num_inference_steps (`int`, *optional*, defaults to 25):
- The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality
- image at the expense of slower inference.
- super_res_num_inference_steps (`int`, *optional*, defaults to 7):
- The number of denoising steps for super resolution. More denoising steps usually lead to a higher
- quality image at the expense of slower inference.
- generator (`torch.Generator`, *optional*):
- A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
- generation deterministic.
- decoder_latents (`torch.FloatTensor` of shape (batch size, channels, height, width), *optional*):
- Pre-generated noisy latents to be used as inputs for the decoder.
- super_res_latents (`torch.FloatTensor` of shape (batch size, channels, super res height, super res width), *optional*):
- Pre-generated noisy latents to be used as inputs for the super resolution.
- decoder_guidance_scale (`float`, *optional*, defaults to 8.0):
- A higher guidance scale value encourages the model to generate images closely linked to the text
- `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- image_embeddings (`torch.Tensor`, *optional*):
- Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings
- can be passed for tasks like image interpolations. `image` can be left as `None`.
- output_type (`str`, *optional*, defaults to `"pil"`):
- The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-
- Returns:
- [`~pipelines.ImagePipelineOutput`] or `tuple`:
- If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is
- returned where the first element is a list with the generated images.
- """
- if image is not None:
- if isinstance(image, PIL.Image.Image):
- batch_size = 1
- elif isinstance(image, list):
- batch_size = len(image)
- else:
- batch_size = image.shape[0]
- else:
- batch_size = image_embeddings.shape[0]
-
- prompt = [""] * batch_size
-
- device = self._execution_device
-
- batch_size = batch_size * num_images_per_prompt
-
- do_classifier_free_guidance = decoder_guidance_scale > 1.0
-
- prompt_embeds, text_encoder_hidden_states, text_mask = self._encode_prompt(
- prompt, device, num_images_per_prompt, do_classifier_free_guidance
- )
-
- image_embeddings = self._encode_image(image, device, num_images_per_prompt, image_embeddings)
-
- # decoder
- text_encoder_hidden_states, additive_clip_time_embeddings = self.text_proj(
- image_embeddings=image_embeddings,
- prompt_embeds=prompt_embeds,
- text_encoder_hidden_states=text_encoder_hidden_states,
- do_classifier_free_guidance=do_classifier_free_guidance,
- )
-
- if device.type == "mps":
- # HACK: MPS: There is a panic when padding bool tensors,
- # so cast to int tensor for the pad and back to bool afterwards
- text_mask = text_mask.type(torch.int)
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=1)
- decoder_text_mask = decoder_text_mask.type(torch.bool)
- else:
- decoder_text_mask = F.pad(text_mask, (self.text_proj.clip_extra_context_tokens, 0), value=True)
-
- self.decoder_scheduler.set_timesteps(decoder_num_inference_steps, device=device)
- decoder_timesteps_tensor = self.decoder_scheduler.timesteps
-
- num_channels_latents = self.decoder.config.in_channels
- height = self.decoder.config.sample_size
- width = self.decoder.config.sample_size
-
- if decoder_latents is None:
- decoder_latents = self.prepare_latents(
- (batch_size, num_channels_latents, height, width),
- text_encoder_hidden_states.dtype,
- device,
- generator,
- decoder_latents,
- self.decoder_scheduler,
- )
-
- for i, t in enumerate(self.progress_bar(decoder_timesteps_tensor)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([decoder_latents] * 2) if do_classifier_free_guidance else decoder_latents
-
- noise_pred = self.decoder(
- sample=latent_model_input,
- timestep=t,
- encoder_hidden_states=text_encoder_hidden_states,
- class_labels=additive_clip_time_embeddings,
- attention_mask=decoder_text_mask,
- ).sample
-
- if do_classifier_free_guidance:
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred_uncond, _ = noise_pred_uncond.split(latent_model_input.shape[1], dim=1)
- noise_pred_text, predicted_variance = noise_pred_text.split(latent_model_input.shape[1], dim=1)
- noise_pred = noise_pred_uncond + decoder_guidance_scale * (noise_pred_text - noise_pred_uncond)
- noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
-
- if i + 1 == decoder_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = decoder_timesteps_tensor[i + 1]
-
- # compute the previous noisy sample x_t -> x_t-1
- decoder_latents = self.decoder_scheduler.step(
- noise_pred, t, decoder_latents, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- decoder_latents = decoder_latents.clamp(-1, 1)
-
- image_small = decoder_latents
-
- # done decoder
-
- # super res
-
- self.super_res_scheduler.set_timesteps(super_res_num_inference_steps, device=device)
- super_res_timesteps_tensor = self.super_res_scheduler.timesteps
-
- channels = self.super_res_first.config.in_channels // 2
- height = self.super_res_first.config.sample_size
- width = self.super_res_first.config.sample_size
-
- if super_res_latents is None:
- super_res_latents = self.prepare_latents(
- (batch_size, channels, height, width),
- image_small.dtype,
- device,
- generator,
- super_res_latents,
- self.super_res_scheduler,
- )
-
- if device.type == "mps":
- # MPS does not support many interpolations
- image_upscaled = F.interpolate(image_small, size=[height, width])
- else:
- interpolate_antialias = {}
- if "antialias" in inspect.signature(F.interpolate).parameters:
- interpolate_antialias["antialias"] = True
-
- image_upscaled = F.interpolate(
- image_small, size=[height, width], mode="bicubic", align_corners=False, **interpolate_antialias
- )
-
- for i, t in enumerate(self.progress_bar(super_res_timesteps_tensor)):
- # no classifier free guidance
-
- if i == super_res_timesteps_tensor.shape[0] - 1:
- unet = self.super_res_last
- else:
- unet = self.super_res_first
-
- latent_model_input = torch.cat([super_res_latents, image_upscaled], dim=1)
-
- noise_pred = unet(
- sample=latent_model_input,
- timestep=t,
- ).sample
-
- if i + 1 == super_res_timesteps_tensor.shape[0]:
- prev_timestep = None
- else:
- prev_timestep = super_res_timesteps_tensor[i + 1]
-
- # compute the previous noisy sample x_t -> x_t-1
- super_res_latents = self.super_res_scheduler.step(
- noise_pred, t, super_res_latents, prev_timestep=prev_timestep, generator=generator
- ).prev_sample
-
- image = super_res_latents
-
- # done super res
-
- # post processing
-
- image = image * 0.5 + 0.5
- image = image.clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
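For context, the deleted file above is the image-variation variant of unCLIP shipped with `diffusers`. A minimal, hedged usage sketch follows; the checkpoint id `kakaobrain/karlo-v1-alpha-image-variations` and the input/output file names are assumptions, not taken from this file.

```python
import torch
from PIL import Image
from diffusers import UnCLIPImageVariationPipeline

# Load a public unCLIP image-variation checkpoint (assumed model id).
pipe = UnCLIPImageVariationPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

init_image = Image.open("input.png").convert("RGB")  # any RGB image

# decoder_num_inference_steps / super_res_num_inference_steps keep their defaults (25 / 7).
result = pipe(image=init_image, num_images_per_prompt=2)
result.images[0].save("variation_0.png")
```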
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco.py
deleted file mode 100644
index 6e124116bcfa9358613507f74ebadb162d8c86a9..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco.py
+++ /dev/null
@@ -1,105 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-# model settings
-model = dict(
- type='FCOS',
- pretrained='open-mmlab://detectron/resnet50_caffe',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=False),
- norm_eval=True,
- style='caffe'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=1,
- add_extra_convs=True,
- extra_convs_on_inputs=False, # use P5
- num_outs=5,
- relu_before_extra_convs=True),
- bbox_head=dict(
- type='FCOSHead',
- num_classes=80,
- in_channels=256,
- stacked_convs=4,
- feat_channels=256,
- strides=[8, 16, 32, 64, 128],
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='IoULoss', loss_weight=1.0),
- loss_centerness=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.4,
- min_pos_iou=0,
- ignore_iof_thr=-1),
- allowed_border=-1,
- pos_weight=-1,
- debug=False),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100))
-img_norm_cfg = dict(
- mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-# optimizer
-optimizer = dict(
- lr=0.01, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.))
-optimizer_config = dict(
- _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='constant',
- warmup_iters=500,
- warmup_ratio=1.0 / 3,
- step=[8, 11])
-runner = dict(type='EpochBasedRunner', max_epochs=12)
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/reppoints/README.md b/spaces/Andy1621/uniformer_image_detection/configs/reppoints/README.md
deleted file mode 100644
index 2ab22cd8e83151b5f028df96641fea5bfe6caa7a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/reppoints/README.md
+++ /dev/null
@@ -1,54 +0,0 @@
-# RepPoints: Point Set Representation for Object Detection
-
-By [Ze Yang](https://yangze.tech/), [Shaohui Liu](http://b1ueber2y.me/), and [Han Hu](https://ancientmooner.github.io/).
-
-We provide code support and configuration files to reproduce the results in the paper for
-["RepPoints: Point Set Representation for Object Detection"](https://arxiv.org/abs/1904.11490) on COCO object detection.
-
-## Introduction
-
-[ALGORITHM]
-
-**RepPoints**, initially described in [arXiv](https://arxiv.org/abs/1904.11490), is a new representation method for visual objects, on which visual understanding tasks are typically centered. Visual object representation, aiming at both geometric description and appearance feature extraction, is conventionally achieved by `bounding box + RoIPool (RoIAlign)`. The bounding box representation is convenient to use; however, it provides only a rectangular localization of objects that lacks geometric precision and may consequently degrade feature quality. Our new representation, RepPoints, models objects by a `point set` instead of a `bounding box`; the points learn to adaptively position themselves over an object in a manner that circumscribes the object’s `spatial extent` and enables `semantically aligned feature extraction`. This richer and more flexible representation maintains the convenience of bounding boxes while facilitating various visual understanding applications. This repo demonstrates the effectiveness of RepPoints for COCO object detection.
-
-Another feature of this repo is the demonstration of an `anchor-free detector`, which can be as effective as state-of-the-art anchor-based detection methods. The anchor-free detector can utilize either `bounding box` or `RepPoints` as the basic object representation.
-
-
-*Figure: Learning RepPoints in Object Detection.*
-
-## Citing RepPoints
-
-```
-@inproceedings{yang2019reppoints,
- title={RepPoints: Point Set Representation for Object Detection},
- author={Yang, Ze and Liu, Shaohui and Hu, Han and Wang, Liwei and Lin, Stephen},
- booktitle={The IEEE International Conference on Computer Vision (ICCV)},
- month={Oct},
- year={2019}
-}
-```
-
-## Results and models
-
-The results on COCO 2017val are shown in the table below.
-
-| Method | Backbone | GN | Anchor | convert func | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:---------:|:-------------:|:---:|:------:|:------------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| BBox | R-50-FPN | Y | single | - | 1x | 3.9 | 15.9 | 36.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/bbox_r50_grid_fpn_gn-neck+head_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/bbox_r50_grid_fpn_gn-neck%2Bhead_1x_coco/bbox_r50_grid_fpn_gn-neck%2Bhead_1x_coco_20200329-c98bfa96.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/bbox_r50_grid_fpn_gn-neck%2Bhead_1x_coco/bbox_r50_grid_fpn_gn-neck%2Bhead_1x_coco_20200329_145916.log.json) |
-| BBox | R-50-FPN | Y | none | - | 1x | 3.9 | 15.4 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+Bhead_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/bbox_r50_grid_center_fpn_gn-neck%2Bhead_1x_coco/bbox_r50_grid_center_fpn_gn-neck%2Bhead_1x_coco_20200330-00f73d58.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/bbox_r50_grid_center_fpn_gn-neck%2Bhead_1x_coco/bbox_r50_grid_center_fpn_gn-neck%2Bhead_1x_coco_20200330_233609.log.json) |
-| RepPoints | R-50-FPN | N | none | moment | 1x | 3.3 | 18.5 | 37.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/reppoints_moment_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_1x_coco/reppoints_moment_r50_fpn_1x_coco_20200330-b73db8d1.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_1x_coco/reppoints_moment_r50_fpn_1x_coco_20200330_233609.log.json) |
-| RepPoints | R-50-FPN | Y | none | moment | 1x | 3.9 | 17.5 | 38.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/reppoints_moment_r50_fpn_gn-neck%2Bhead_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_gn-neck%2Bhead_1x_coco/reppoints_moment_r50_fpn_gn-neck%2Bhead_1x_coco_20200329-4b38409a.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_gn-neck%2Bhead_1x_coco/reppoints_moment_r50_fpn_gn-neck%2Bhead_1x_coco_20200329_145952.log.json) |
-| RepPoints | R-50-FPN | Y | none | moment | 2x | 3.9 | - | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/reppoints_moment_r50_fpn_gn-neck+head_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_gn-neck%2Bhead_2x_coco/reppoints_moment_r50_fpn_gn-neck%2Bhead_2x_coco_20200329-91babaa2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_gn-neck%2Bhead_2x_coco/reppoints_moment_r50_fpn_gn-neck%2Bhead_2x_coco_20200329_150020.log.json) |
-| RepPoints | R-101-FPN | Y | none | moment | 2x | 5.8 | 13.7 | 40.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/reppoints_moment_r101_fpn_gn-neck+head_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r101_fpn_gn-neck%2Bhead_2x_coco/reppoints_moment_r101_fpn_gn-neck%2Bhead_2x_coco_20200329-4fbc7310.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r101_fpn_gn-neck%2Bhead_2x_coco/reppoints_moment_r101_fpn_gn-neck%2Bhead_2x_coco_20200329_132205.log.json) |
-| RepPoints | R-101-FPN-DCN | Y | none | moment | 2x | 5.9 | 12.1 | 42.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck+head_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco_20200329-3309fbf2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco/reppoints_moment_r101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco_20200329_132134.log.json) |
-| RepPoints | X-101-FPN-DCN | Y | none | moment | 2x | 7.1 | 9.3 | 44.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints/reppoints_moment_x101_fpn_dconv_c3-c5_gn-neck+head_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_x101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco/reppoints_moment_x101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco_20200329-f87da1ea.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_x101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco/reppoints_moment_x101_fpn_dconv_c3-c5_gn-neck%2Bhead_2x_coco_20200329_132201.log.json) |
-
-**Notes:**
-
-- `R-xx`, `X-xx` denote the ResNet and ResNeXt architectures, respectively.
-- `DCN` denotes replacing 3x3 conv with the 3x3 deformable convolution in `c3-c5` stages of backbone.
-- `none` in the `anchor` column means 2-d `center point` (x,y) is used to represent the initial object hypothesis. `single` denotes one 4-d anchor box (x,y,w,h) with IoU based label assign criterion is adopted.
-- `moment`, `partial MinMax`, `MinMax` in the `convert func` column are three functions to convert a point set to a pseudo box (a minimal sketch of the MinMax variant is shown after these notes).
-- Note the results here are slightly different from those reported in the paper, due to the framework change. While the original paper uses an [MXNet](https://mxnet.apache.org/) implementation, we re-implement the method in [PyTorch](https://pytorch.org/) based on mmdetection.
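As referenced in the notes above, the convert functions turn a learned point set into a pseudo box for assignment and evaluation. Below is a minimal sketch of the MinMax variant only; the function name `points_to_pseudo_bbox` is an assumption for illustration, not the repo's API.

```python
import torch


def points_to_pseudo_bbox(points: torch.Tensor) -> torch.Tensor:
    """Convert each point set to a pseudo box using the MinMax rule.

    points: (N, num_points, 2) tensor of (x, y) coordinates.
    Returns an (N, 4) tensor of boxes in (x1, y1, x2, y2) format.
    """
    x_min = points[..., 0].min(dim=1).values
    y_min = points[..., 1].min(dim=1).values
    x_max = points[..., 0].max(dim=1).values
    y_max = points[..., 1].max(dim=1).values
    return torch.stack([x_min, y_min, x_max, y_max], dim=1)


# Two sets of 9 representative points (the default number of points in RepPoints).
pts = torch.rand(2, 9, 2) * 100
print(points_to_pseudo_bbox(pts))
```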
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_caffe_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_caffe_fpn_1x_coco.py
deleted file mode 100644
index 398f3c14db1d63343b08bd5280d69aaae6c70a99..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_caffe_fpn_1x_coco.py
+++ /dev/null
@@ -1,37 +0,0 @@
-_base_ = './rpn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- norm_cfg=dict(requires_grad=False), norm_eval=True, style='caffe'))
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_label=False),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py
deleted file mode 100644
index 36a510ff41788a5861b5a9504d8e3d08502072e4..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context_59.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/app.html b/spaces/AnimalEquality/chatbot/_proc/_docs/app.html
deleted file mode 100644
index 61251d6e3a3b814214a346244bf2060f28dc3c15..0000000000000000000000000000000000000000
--- a/spaces/AnimalEquality/chatbot/_proc/_docs/app.html
+++ /dev/null
@@ -1,660 +0,0 @@
-lv-recipe-chatbot - app
from dotenv import load_dotenv
-
-
-
#: eval: false
- load_dotenv()
-
-
-
-Put the chat backend pieces together
-
-
-ConversationBufferMemory
-
- ConversationBufferMemory (chat_memory:langchain.schema.memory.BaseChatMessageHistory=None,
-  output_key:Optional[str]=None, input_key:Optional[str]=None,
-  return_messages:bool=False, human_prefix:str='Human', ai_prefix:str='AI',
-  memory_key:str='history')
-
-Buffer for storing conversation memory.
-
-
-
-ChatMessageHistory
-
- ChatMessageHistory (messages:List[langchain.schema.messages.BaseMessage]=[])
-
-In memory implementation of chat message history.
-Stores messages in an in memory list.
-
-
-
-ChatOpenAI
-
- ChatOpenAI (cache:Optional[bool]=None, verbose:bool=None,
-  callbacks:Union[List[langchain.callbacks.base.BaseCallbackHandler],langchain.callbacks.base.BaseCallbackManager,NoneType]=None,
-  callback_manager:Optional[langchain.callbacks.base.BaseCallbackManager]=None,
-  tags:Optional[List[str]]=None, metadata:Optional[Dict[str,Any]]=None,
-  client:Any=None, model:str='gpt-3.5-turbo', temperature:float=0.7,
-  model_kwargs:Dict[str,Any]=None, openai_api_key:Optional[str]=None,
-  openai_api_base:Optional[str]=None, openai_organization:Optional[str]=None,
-  openai_proxy:Optional[str]=None,
-  request_timeout:Union[float,Tuple[float,float],NoneType]=None,
-  max_retries:int=6, streaming:bool=False, n:int=1, max_tokens:Optional[int]=None,
-  tiktoken_model_name:Optional[str]=None)
-
-Wrapper around OpenAI Chat large language models.
-To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key.
-Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class.
-Example: .. code-block:: python
- from langchain.chat_models import ChatOpenAI
- openai = ChatOpenAI(model_name="gpt-3.5-turbo")
-
-
llm = ChatOpenAI(temperature= 1 )
- MEMORY_KEY = "chat_history"
- chat_msgs = INIT_PROMPT.format_prompt(
- ingredients= "tofu, pickles, mustard, olives, tomatoes, lettuce, bell peppers, carrots, bread" ,
- allergies= "" ,
- recipe_freeform_input= "The preparation time shVegan spaghetti aglio e olio ould be less than 30 minutes. I really love Thai food!" ,
- )
- chat_msgs = chat_msgs.to_messages()
- results = llm.generate([chat_msgs])
-
- chat_msgs.append(results.generations[0 ][0 ].message)
- tools = [vegan_recipe_edamam_search]
- prompt = OpenAIFunctionsAgent.create_prompt(
- system_message= INIT_PROMPT.messages[0 ],
- extra_prompt_messages= chat_msgs + [MessagesPlaceholder(variable_name= MEMORY_KEY)],
- )
- memory = ConversationBufferMemory(
- chat_memory= ChatMessageHistory(messages= chat_msgs),
- return_messages= True ,
- memory_key= MEMORY_KEY,
- )
- agent_executor = AgentExecutor(
- agent= OpenAIFunctionsAgent(llm= llm, tools= tools, prompt= prompt),
- tools= tools,
- memory= memory,
- verbose= True ,
- )
-
-
-
memory.chat_memory.messages
-
-
[SystemMessage(content='The following is a conversation between a human and a friendly vegan AI. \nThe AI is compassionate to animals and therefore serves to promote veganism.\nKnowledge: A vegan diet implies a plant-based diet avoiding all animal foods such as meat (including fish, shellfish and insects), dairy, eggs and honey.\nIf the human messages are not aligned with veganism, remind them of your purpose.\nThe AI never generates vegan recipes itself but instead uses a tool.', additional_kwargs={}),
- AIMessage(content='What ingredients do you wish to cook with?', additional_kwargs={}, example=False),
- HumanMessage(content='Ingredients: tofu, pickles, mustard, olives, tomatoes, lettuce, bell peppers, carrots, bread', additional_kwargs={}, example=False),
- AIMessage(content='Do you have any allergies I should be aware of?', additional_kwargs={}, example=False),
- HumanMessage(content='Allergies: ', additional_kwargs={}, example=False),
- AIMessage(content='Do you have any preferences I should consider for the recipe such as preparation time, difficulty, or cuisine region?', additional_kwargs={}, example=False),
- HumanMessage(content="Preferences: `The preparation time shVegan spaghetti aglio e olio ould be less than 30 minutes. I really love Thai food!`\nYour task is compose a concise, 6 word max vegan recipe keyword query to use in an API search.\nThink step by step.\n\n1. If the user listed any ingredients, choose the three ingredients that are most commonly used together in recipes that fall within the user's preferences (if any are included). \n2. If the user provided any allergies, include them in the query.\nFormat your response as message with the allergy and diet preferences first and then the ingredients.\nExamples:\n'Vegan gluten-free chicken peppers' or 'Vegan tofu, brocolli, and miso'", additional_kwargs={}, example=False),
- AIMessage(content='Vegan, quick, Thai tofu, bell peppers', additional_kwargs={}, example=False)]
-
-
-
-
agent_executor.run("Search for vegan recipe" )
-
-
-
-> Entering new AgentExecutor chain...
-
-Invoking: `vegan_recipe_edamam_search` with `{'query': 'Tofu pickle sandwich with Thai-inspired flavors'}`
-
-
-[]I apologize, but I couldn't find any vegan recipes matching your query. Can I help you with anything else?
-
-> Finished chain.
-
-
-
"I apologize, but I couldn't find any vegan recipes matching your query. Can I help you with anything else?"
-
-
-
-
agent_executor.run("Which ingredients that I provided go the best together in dishes?" )
-
-
NameError: name 'agent_executor' is not defined
-
-
-
-source
-
-
-ConversationBot
-
- ConversationBot (verbose=True)
-
-Initialize self. See help(type(self)) for accurate signature.
-
-
os.listdir(SAMPLE_IMG_DIR)
- SAMPLE_IMG_DIR
-
-
Path('/home/evylz/AnimalEquality/lv-recipe-chatbot/assets/images/vegan_ingredients')
-
-
-
-
-
-
CPU times: user 6.19 s, sys: 1.47 s, total: 7.66 s
-Wall time: 4.68 s
-
-
-
-
-
-
I uploaded an image that may contain vegan ingredients.
-The description of the image is: `a refrigerator with food inside`.
-The extracted ingredients are:
-```
-cabbage lettuce onion
-apples
-rice
-plant-based milk
-```
-
-CPU times: user 56.7 s, sys: 63.6 ms, total: 56.8 s
-Wall time: 5.95 s
-
-
-
-source
-
-
-create_demo
-
- create_demo (bot=<class '__main__.ConversationBot'>)
-
-
-
if "demo" in globals ():
- demo.close()
- demo = create_demo(bot)
- demo.launch()
-
-
Closing server running on port: 7860
-Running on local URL: http://127.0.0.1:7860
-
-To create a public link, set `share=True` in `launch()`.
-
\ No newline at end of file
diff --git a/spaces/Aniquel/WizApp/app.py b/spaces/Aniquel/WizApp/app.py
deleted file mode 100644
index a4e1e7d0035f3d1c5ebde5610ddf80d847b7a1fd..0000000000000000000000000000000000000000
--- a/spaces/Aniquel/WizApp/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("spaces/eugenesiow/remove-bg").launch()
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/engine/test.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/engine/test.py
deleted file mode 100644
index 8dbeef271db634ec2dadfda3bc0b5ef9c7a677ff..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/engine/test.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-import pickle
-import shutil
-import tempfile
-import time
-
-import torch
-import torch.distributed as dist
-
-import annotator.uniformer.mmcv as mmcv
-from annotator.uniformer.mmcv.runner import get_dist_info
-
-
-def single_gpu_test(model, data_loader):
- """Test model with a single gpu.
-
- This method tests model with a single gpu and displays test progress bar.
-
- Args:
- model (nn.Module): Model to be tested.
- data_loader (nn.Dataloader): Pytorch data loader.
-
- Returns:
- list: The prediction results.
- """
- model.eval()
- results = []
- dataset = data_loader.dataset
- prog_bar = mmcv.ProgressBar(len(dataset))
- for data in data_loader:
- with torch.no_grad():
- result = model(return_loss=False, **data)
- results.extend(result)
-
- # Assume result has the same length of batch_size
- # refer to https://github.com/open-mmlab/mmcv/issues/985
- batch_size = len(result)
- for _ in range(batch_size):
- prog_bar.update()
- return results
-
-
-def multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False):
- """Test model with multiple gpus.
-
- This method tests model with multiple gpus and collects the results
- under two different modes: gpu and cpu modes. By setting
- ``gpu_collect=True``, it encodes results to gpu tensors and use gpu
- communication for results collection. On cpu mode it saves the results on
- different gpus to ``tmpdir`` and collects them by the rank 0 worker.
-
- Args:
- model (nn.Module): Model to be tested.
- data_loader (nn.Dataloader): Pytorch data loader.
- tmpdir (str): Path of directory to save the temporary results from
- different gpus under cpu mode.
- gpu_collect (bool): Option to use either gpu or cpu to collect results.
-
- Returns:
- list: The prediction results.
- """
- model.eval()
- results = []
- dataset = data_loader.dataset
- rank, world_size = get_dist_info()
- if rank == 0:
- prog_bar = mmcv.ProgressBar(len(dataset))
- time.sleep(2) # This line can prevent deadlock problem in some cases.
- for i, data in enumerate(data_loader):
- with torch.no_grad():
- result = model(return_loss=False, **data)
- results.extend(result)
-
- if rank == 0:
- batch_size = len(result)
- batch_size_all = batch_size * world_size
- if batch_size_all + prog_bar.completed > len(dataset):
- batch_size_all = len(dataset) - prog_bar.completed
- for _ in range(batch_size_all):
- prog_bar.update()
-
- # collect results from all ranks
- if gpu_collect:
- results = collect_results_gpu(results, len(dataset))
- else:
- results = collect_results_cpu(results, len(dataset), tmpdir)
- return results
-
-
-def collect_results_cpu(result_part, size, tmpdir=None):
- """Collect results under cpu mode.
-
- On cpu mode, this function will save the results on different gpus to
- ``tmpdir`` and collect them by the rank 0 worker.
-
- Args:
- result_part (list): Result list containing result parts
- to be collected.
- size (int): Size of the results, commonly equal to length of
- the results.
- tmpdir (str | None): temporal directory for collected results to
- store. If set to None, it will create a random temporal directory
- for it.
-
- Returns:
- list: The collected results.
- """
- rank, world_size = get_dist_info()
- # create a tmp dir if it is not specified
- if tmpdir is None:
- MAX_LEN = 512
- # 32 is whitespace
- dir_tensor = torch.full((MAX_LEN, ),
- 32,
- dtype=torch.uint8,
- device='cuda')
- if rank == 0:
- mmcv.mkdir_or_exist('.dist_test')
- tmpdir = tempfile.mkdtemp(dir='.dist_test')
- tmpdir = torch.tensor(
- bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda')
- dir_tensor[:len(tmpdir)] = tmpdir
- dist.broadcast(dir_tensor, 0)
- tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip()
- else:
- mmcv.mkdir_or_exist(tmpdir)
- # dump the part result to the dir
- mmcv.dump(result_part, osp.join(tmpdir, f'part_{rank}.pkl'))
- dist.barrier()
- # collect all parts
- if rank != 0:
- return None
- else:
- # load results of all parts from tmp dir
- part_list = []
- for i in range(world_size):
- part_file = osp.join(tmpdir, f'part_{i}.pkl')
- part_result = mmcv.load(part_file)
- # When data is severely insufficient, an empty part_result
- # on a certain gpu could make the overall outputs empty.
- if part_result:
- part_list.append(part_result)
- # sort the results
- ordered_results = []
- for res in zip(*part_list):
- ordered_results.extend(list(res))
- # the dataloader may pad some samples
- ordered_results = ordered_results[:size]
- # remove tmp dir
- shutil.rmtree(tmpdir)
- return ordered_results
-
-
-def collect_results_gpu(result_part, size):
- """Collect results under gpu mode.
-
- On gpu mode, this function will encode results to gpu tensors and use gpu
- communication for results collection.
-
- Args:
- result_part (list): Result list containing result parts
- to be collected.
- size (int): Size of the results, commonly equal to length of
- the results.
-
- Returns:
- list: The collected results.
- """
- rank, world_size = get_dist_info()
- # dump result part to tensor with pickle
- part_tensor = torch.tensor(
- bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda')
- # gather all result part tensor shape
- shape_tensor = torch.tensor(part_tensor.shape, device='cuda')
- shape_list = [shape_tensor.clone() for _ in range(world_size)]
- dist.all_gather(shape_list, shape_tensor)
- # padding result part tensor to max length
- shape_max = torch.tensor(shape_list).max()
- part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda')
- part_send[:shape_tensor[0]] = part_tensor
- part_recv_list = [
- part_tensor.new_zeros(shape_max) for _ in range(world_size)
- ]
- # gather all result part
- dist.all_gather(part_recv_list, part_send)
-
- if rank == 0:
- part_list = []
- for recv, shape in zip(part_recv_list, shape_list):
- part_result = pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())
- # When data is severely insufficient, an empty part_result
- # on a certain gpu could make the overall outputs empty.
- if part_result:
- part_list.append(part_result)
- # sort the results
- ordered_results = []
- for res in zip(*part_list):
- ordered_results.extend(list(res))
- # the dataloader may pad some samples
- ordered_results = ordered_results[:size]
- return ordered_results
diff --git a/spaces/Artgor/digit-draw-detect/.github/README.md b/spaces/Artgor/digit-draw-detect/.github/README.md
deleted file mode 100644
index 639895967b99ff5978926f63e06caf06876198a5..0000000000000000000000000000000000000000
--- a/spaces/Artgor/digit-draw-detect/.github/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
-
-[](https://deepsource.io/gh/Erlemar/digit-draw-detect/?ref=repository-badge )
-
-This is a repo of my "Handwritten digit detector" pet-project. It uses a YOLOv3 model trained from scratch and Streamlit for the frontend. You can see the live version of the app [here](https://huggingface.co/spaces/Artgor/digit-draw-detect).
-
-If you are interested in reading more about this project, here are some links:
-* [Project page on my personal website](https://andlukyane.com/project/drawn-digits-prediction)
-* [A dataset with the digits and bounding boxes on Kaggle](https://www.kaggle.com/datasets/artgor/handwritten-digits-and-bounding-boxes)
-* [Training code](https://github.com/Erlemar/pytorch_tempest_pet_)
-* [Blogpost on my personal website](https://andlukyane.com/blog/a-third-life-of-a-personal-project)
-* [Blogpost on medium](https://towardsdatascience.com/the-third-life-of-a-personal-pet-project-for-handwritten-digit-recognition-fd908dc8e7a1)
-* [Russian blogpost on habr](https://habr.com/ru/company/ods/blog/707046/)
-* [W&B report](https://wandb.ai/al-3002-w/pet_project_object_detection/reports/Training-a-model-for-Handwritten-Object-Detection---VmlldzozMTgwMzA2?accessToken=yi6t4sz6iwr1yp78nfpvw71qao5wibak30np9tfft885tdj26g3tk91h1sie3h5m)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/proxy.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/proxy.py
deleted file mode 100644
index 2199cc7b7f004009493d032720c36d6568f9d89e..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/proxy.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from .ssl_ import create_urllib3_context, resolve_cert_reqs, resolve_ssl_version
-
-
-def connection_requires_http_tunnel(
- proxy_url=None, proxy_config=None, destination_scheme=None
-):
- """
- Returns True if the connection requires an HTTP CONNECT through the proxy.
-
- :param URL proxy_url:
- URL of the proxy.
- :param ProxyConfig proxy_config:
- Proxy configuration from poolmanager.py
- :param str destination_scheme:
- The scheme of the destination. (i.e https, http, etc)
- """
- # If we're not using a proxy, no way to use a tunnel.
- if proxy_url is None:
- return False
-
- # HTTP destinations never require tunneling, we always forward.
- if destination_scheme == "http":
- return False
-
- # Support for forwarding with HTTPS proxies and HTTPS destinations.
- if (
- proxy_url.scheme == "https"
- and proxy_config
- and proxy_config.use_forwarding_for_https
- ):
- return False
-
- # Otherwise always use a tunnel.
- return True
-
-
-def create_proxy_ssl_context(
- ssl_version, cert_reqs, ca_certs=None, ca_cert_dir=None, ca_cert_data=None
-):
- """
- Generates a default proxy ssl context if one hasn't been provided by the
- user.
- """
- ssl_context = create_urllib3_context(
- ssl_version=resolve_ssl_version(ssl_version),
- cert_reqs=resolve_cert_reqs(cert_reqs),
- )
-
- if (
- not ca_certs
- and not ca_cert_dir
- and not ca_cert_data
- and hasattr(ssl_context, "load_default_certs")
- ):
- ssl_context.load_default_certs()
-
- return ssl_context
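For reference, a small usage sketch of `connection_requires_http_tunnel` is shown below. It assumes the standalone `urllib3` (1.26.x) package is installed and imported directly; the vendored copy under `pip._vendor` is not meant to be imported by user code, and the proxy URL is a hypothetical example.

```python
from urllib3.util.url import parse_url
from urllib3.util.proxy import connection_requires_http_tunnel

proxy = parse_url("http://proxy.internal:3128")  # hypothetical HTTP proxy

# Plain-HTTP destinations are simply forwarded through the proxy, so no tunnel is needed.
print(connection_requires_http_tunnel(proxy, None, "http"))   # False

# HTTPS destinations need an HTTP CONNECT tunnel unless HTTPS forwarding is configured.
print(connection_requires_http_tunnel(proxy, None, "https"))  # True
```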
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/__init__.py
deleted file mode 100644
index 7802ff158d83eb88e6dbe78d9cd33ca14341662a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/__init__.py
+++ /dev/null
@@ -1,331 +0,0 @@
-# module pyparsing.py
-#
-# Copyright (c) 2003-2022 Paul T. McGuire
-#
-# Permission is hereby granted, free of charge, to any person obtaining
-# a copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish,
-# distribute, sublicense, and/or sell copies of the Software, and to
-# permit persons to whom the Software is furnished to do so, subject to
-# the following conditions:
-#
-# The above copyright notice and this permission notice shall be
-# included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-#
-
-__doc__ = """
-pyparsing module - Classes and methods to define and execute parsing grammars
-=============================================================================
-
-The pyparsing module is an alternative approach to creating and
-executing simple grammars, vs. the traditional lex/yacc approach, or the
-use of regular expressions. With pyparsing, you don't need to learn
-a new syntax for defining grammars or matching expressions - the parsing
-module provides a library of classes that you use to construct the
-grammar directly in Python.
-
-Here is a program to parse "Hello, World!" (or any greeting of the form
-``", !"``), built up using :class:`Word`,
-:class:`Literal`, and :class:`And` elements
-(the :meth:`'+'` operators create :class:`And` expressions,
-and the strings are auto-converted to :class:`Literal` expressions)::
-
- from pyparsing import Word, alphas
-
- # define grammar of a greeting
- greet = Word(alphas) + "," + Word(alphas) + "!"
-
- hello = "Hello, World!"
- print(hello, "->", greet.parse_string(hello))
-
-The program outputs the following::
-
- Hello, World! -> ['Hello', ',', 'World', '!']
-
-The Python representation of the grammar is quite readable, owing to the
-self-explanatory class names, and the use of :class:`'+'`,
-:class:`'|'`, :class:`'^'` and :class:`'&'` operators.
-
-The :class:`ParseResults` object returned from
-:class:`ParserElement.parseString` can be
-accessed as a nested list, a dictionary, or an object with named
-attributes.
-
-The pyparsing module handles some of the problems that are typically
-vexing when writing text parsers:
-
- - extra or missing whitespace (the above program will also handle
- "Hello,World!", "Hello , World !", etc.)
- - quoted strings
- - embedded comments
-
-
-Getting Started -
------------------
-Visit the classes :class:`ParserElement` and :class:`ParseResults` to
-see the base classes that most other pyparsing
-classes inherit from. Use the docstrings for examples of how to:
-
- - construct literal match expressions from :class:`Literal` and
- :class:`CaselessLiteral` classes
- - construct character word-group expressions using the :class:`Word`
- class
- - see how to create repetitive expressions using :class:`ZeroOrMore`
- and :class:`OneOrMore` classes
- - use :class:`'+'`, :class:`'|'`, :class:`'^'`,
- and :class:`'&'` operators to combine simple expressions into
- more complex ones
- - associate names with your parsed results using
- :class:`ParserElement.setResultsName`
- - access the parsed data, which is returned as a :class:`ParseResults`
- object
- - find some helpful expression short-cuts like :class:`delimitedList`
- and :class:`oneOf`
- - find more useful common expressions in the :class:`pyparsing_common`
- namespace class
-"""
-from typing import NamedTuple
-
-
-class version_info(NamedTuple):
- major: int
- minor: int
- micro: int
- releaselevel: str
- serial: int
-
- @property
- def __version__(self):
- return (
- "{}.{}.{}".format(self.major, self.minor, self.micro)
- + (
- "{}{}{}".format(
- "r" if self.releaselevel[0] == "c" else "",
- self.releaselevel[0],
- self.serial,
- ),
- "",
- )[self.releaselevel == "final"]
- )
-
- def __str__(self):
- return "{} {} / {}".format(__name__, self.__version__, __version_time__)
-
- def __repr__(self):
- return "{}.{}({})".format(
- __name__,
- type(self).__name__,
- ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)),
- )
-
-
-__version_info__ = version_info(3, 0, 9, "final", 0)
-__version_time__ = "05 May 2022 07:02 UTC"
-__version__ = __version_info__.__version__
-__versionTime__ = __version_time__
-__author__ = "Paul McGuire "
-
-from .util import *
-from .exceptions import *
-from .actions import *
-from .core import __diag__, __compat__
-from .results import *
-from .core import *
-from .core import _builtin_exprs as core_builtin_exprs
-from .helpers import *
-from .helpers import _builtin_exprs as helper_builtin_exprs
-
-from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode
-from .testing import pyparsing_test as testing
-from .common import (
- pyparsing_common as common,
- _builtin_exprs as common_builtin_exprs,
-)
-
-# define backward compat synonyms
-if "pyparsing_unicode" not in globals():
- pyparsing_unicode = unicode
-if "pyparsing_common" not in globals():
- pyparsing_common = common
-if "pyparsing_test" not in globals():
- pyparsing_test = testing
-
-core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs
-
-
-__all__ = [
- "__version__",
- "__version_time__",
- "__author__",
- "__compat__",
- "__diag__",
- "And",
- "AtLineStart",
- "AtStringStart",
- "CaselessKeyword",
- "CaselessLiteral",
- "CharsNotIn",
- "Combine",
- "Dict",
- "Each",
- "Empty",
- "FollowedBy",
- "Forward",
- "GoToColumn",
- "Group",
- "IndentedBlock",
- "Keyword",
- "LineEnd",
- "LineStart",
- "Literal",
- "Located",
- "PrecededBy",
- "MatchFirst",
- "NoMatch",
- "NotAny",
- "OneOrMore",
- "OnlyOnce",
- "OpAssoc",
- "Opt",
- "Optional",
- "Or",
- "ParseBaseException",
- "ParseElementEnhance",
- "ParseException",
- "ParseExpression",
- "ParseFatalException",
- "ParseResults",
- "ParseSyntaxException",
- "ParserElement",
- "PositionToken",
- "QuotedString",
- "RecursiveGrammarException",
- "Regex",
- "SkipTo",
- "StringEnd",
- "StringStart",
- "Suppress",
- "Token",
- "TokenConverter",
- "White",
- "Word",
- "WordEnd",
- "WordStart",
- "ZeroOrMore",
- "Char",
- "alphanums",
- "alphas",
- "alphas8bit",
- "any_close_tag",
- "any_open_tag",
- "c_style_comment",
- "col",
- "common_html_entity",
- "counted_array",
- "cpp_style_comment",
- "dbl_quoted_string",
- "dbl_slash_comment",
- "delimited_list",
- "dict_of",
- "empty",
- "hexnums",
- "html_comment",
- "identchars",
- "identbodychars",
- "java_style_comment",
- "line",
- "line_end",
- "line_start",
- "lineno",
- "make_html_tags",
- "make_xml_tags",
- "match_only_at_col",
- "match_previous_expr",
- "match_previous_literal",
- "nested_expr",
- "null_debug_action",
- "nums",
- "one_of",
- "printables",
- "punc8bit",
- "python_style_comment",
- "quoted_string",
- "remove_quotes",
- "replace_with",
- "replace_html_entity",
- "rest_of_line",
- "sgl_quoted_string",
- "srange",
- "string_end",
- "string_start",
- "trace_parse_action",
- "unicode_string",
- "with_attribute",
- "indentedBlock",
- "original_text_for",
- "ungroup",
- "infix_notation",
- "locatedExpr",
- "with_class",
- "CloseMatch",
- "token_map",
- "pyparsing_common",
- "pyparsing_unicode",
- "unicode_set",
- "condition_as_parse_action",
- "pyparsing_test",
- # pre-PEP8 compatibility names
- "__versionTime__",
- "anyCloseTag",
- "anyOpenTag",
- "cStyleComment",
- "commonHTMLEntity",
- "countedArray",
- "cppStyleComment",
- "dblQuotedString",
- "dblSlashComment",
- "delimitedList",
- "dictOf",
- "htmlComment",
- "javaStyleComment",
- "lineEnd",
- "lineStart",
- "makeHTMLTags",
- "makeXMLTags",
- "matchOnlyAtCol",
- "matchPreviousExpr",
- "matchPreviousLiteral",
- "nestedExpr",
- "nullDebugAction",
- "oneOf",
- "opAssoc",
- "pythonStyleComment",
- "quotedString",
- "removeQuotes",
- "replaceHTMLEntity",
- "replaceWith",
- "restOfLine",
- "sglQuotedString",
- "stringEnd",
- "stringStart",
- "traceParseAction",
- "unicodeString",
- "withAttribute",
- "indentedBlock",
- "originalTextFor",
- "infixNotation",
- "locatedExpr",
- "withClass",
- "tokenMap",
- "conditionAsParseAction",
- "autoname_elements",
-]
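
For context on what the vendored module deleted above exposes: the greeting grammar from its docstring can be exercised directly, and the ParseResults it returns supports list-, dict-, and attribute-style access. A minimal, hedged sketch using the standard pyparsing 3.x API (the results name "who" is just an illustrative choice):

from pyparsing import Word, alphas

# greeting grammar from the docstring above, with one named result added
greet = Word(alphas) + "," + Word(alphas).set_results_name("who") + "!"
result = greet.parse_string("Hello, World!")

print(result.as_list())  # ['Hello', ',', 'World', '!']
print(result["who"])     # 'World'  -- dict-style access to the named result
print(result.who)        # 'World'  -- attribute-style access
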
diff --git a/spaces/Benson/text-generation/Examples/Arrow Fest Apk.md b/spaces/Benson/text-generation/Examples/Arrow Fest Apk.md
deleted file mode 100644
index 6ae63ba194d82e9302d11ca199da802c92dc654c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Arrow Fest Apk.md
+++ /dev/null
@@ -1,47 +0,0 @@
-
-Arrow Fest APK: A Fun and Addictive Action Game for Android
-If you are looking for a new and exciting action game to play on your Android device, you may want to take a look at Arrow Fest APK. This is a game where you have to control your arrows, choose the best gates, and destroy everyone in your path. You can collect plenty of coins and upgrade your arrows and income, as well as face different enemies and giants. In this article, we will tell you more about what Arrow Fest APK is, how to play it, what features it has, and how to download and install it on your device.
-arrow fest apk Download File ✪✪✪ https://bltlly.com/2v6JxZ
-What is Arrow Fest APK?
-Arrow Fest APK is an action game developed by Rollic Games, a popular game studio that has created many other hit games such as Go Knots 3D, Tangle Master 3D, High Heels!, and more. Arrow Fest APK is one of their latest games, released in May 2023. It has already gained more than 10 million downloads and a 3.6-star rating on the Google Play Store. It is also available on other platforms such as APKCombo.
-The gameplay of Arrow Fest APK
-The gameplay of Arrow Fest APK is simple and addictive. You have to swipe on the screen to control the arrows, which multiply as you pass through the gates. You have to choose the best gates that will give you more arrows, while avoiding the ones that will reduce them. You also have to aim and shoot at the enemies and giants that will try to stop you. You can kill them with a single hit if you have enough arrows, but if you run out of arrows, you will lose the game. You can also collect coins along the way, which you can use to upgrade your arrows and income.
-The features of Arrow Fest APK
-Arrow Fest APK has many features that make it fun and enjoyable to play. Here are some of them:
-Simple and intuitive controls
-
-Many unique levels to play
-Arrow Fest APK has plenty of unique levels that will challenge your skills and reflexes. Each level has different layouts, gates, enemies, and giants. You will never get bored as you progress through the game. Some levels are easy and relaxing, while others are hard and intense. You will have to use your strategy and logic to choose the best gates and avoid the traps.
-Many enemies and giants to destroy
-Arrow Fest APK has plenty of enemies and giants that will try to keep you from reaching the end of the level. They come in different shapes, sizes, colors, and behaviors. Some of them are fast and agile, while others are slow and bulky. Some of them are harmless and passive, while others are aggressive and dangerous. You will have to be careful and alert when facing them.
-
-Many gates to decide between
-Arrow Fest APK has plenty of gates that will affect your arrows in different ways. Some gates will multiply your arrows, while others will divide them. Some gates will change the color or shape of your arrows, while others will change their direction or speed. Some gates will give you bonuses or power-ups, while others will give you penalties or obstacles
. You will have to make quick and smart decisions as you pass through the gates.
- Plenty of coins to collect and upgrade your arrows and income
-Arrow Fest APK has plenty of coins that you can collect as you play the game. You can use the coins to upgrade your arrows and income. You can increase the number, size, speed, and power of your arrows, as well as the amount of coins you earn per level. You can also unlock new types of arrows, such as fire arrows, ice arrows, lightning arrows, and more. Upgrading your arrows and income will help you get past the harder levels and enemies.
-How to download and install Arrow Fest APK?
-
-Download the APK file from a trusted source
-The first step is to download the Arrow Fest APK file from a trusted source. You can use the links provided below to download the latest version of the game from APKCombo or the Google Play Store. Make sure you have enough storage space on your device before downloading the file.
-Enable unknown sources on your device
-The next step is to enable unknown sources on your device. This will allow you to install apps that do not come from the official app store. To do this, go to your device settings, then security, then unknown sources. Turn on the option to allow the installation of apps from unknown sources. You may see a warning message, but you can ignore it and proceed.
-Install the APK file and launch the game
-The final step is to install the APK file and launch the game. Locate the downloaded APK file on your device, then tap on it to start the installation process. Follow the on-screen instructions to complete the installation. Once done, you can find the game icon on your home screen or in the app drawer. Tap on it to launch the game and enjoy playing Arrow Fest APK.
-Conclusion
-Arrow Fest APK is a fun and addictive action game for Android devices. It has simple and intuitive controls, many unique levels, many enemies and giants, many gates, and plenty of coins. It is a game that will test your skills and reflexes, as well as entertain you for hours. If you want to try this game, you can download and install it using the links below. Have fun playing Arrow Fest APK!
-Frequently asked questions
-Here are some frequently asked questions about Arrow Fest APK:
-
-Is Arrow Fest APK free to play?
-Yes, Arrow Fest APK is free to play. However, it may contain ads and in-app purchases that require real money.
-Is it safe to download and install Arrow Fest APK?
-
-What are the minimum requirements to play Arrow Fest APK?
-The minimum requirements to play Arrow Fest APK are Android 5.0 or higher, 100 MB of free storage space, and a stable Internet connection.
-How can I contact the developer of Arrow Fest APK?
-You can contact the developer of Arrow Fest APK by sending an email to support@rollicgames.com or by visiting their website at https://www.rollicgames.com/.
-Can I play Arrow Fest APK offline?
-No, you cannot play Arrow Fest APK offline. You need an Internet connection to play the game.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BridgeEight/internlm-20B-chat-w4-turbomind/install_lmdeploy.sh b/spaces/BridgeEight/internlm-20B-chat-w4-turbomind/install_lmdeploy.sh
deleted file mode 100644
index 464d57885c2a4712921676d2aae390f564182822..0000000000000000000000000000000000000000
--- a/spaces/BridgeEight/internlm-20B-chat-w4-turbomind/install_lmdeploy.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/bash
-
-# Install lmdeploy
-# Get the path of the lib folder under the lmdeploy installation location
-lmdeploy_dir=$(pip show lmdeploy | grep Location | cut -d' ' -f2)
-lib_dir="${lmdeploy_dir}/lmdeploy/lib"
-
-# Check whether the lib directory exists
-if [ ! -d "$lib_dir" ]
-then
- echo "Lib directory does not exist at ${lib_dir}"
- exit 1
-fi
-
-# Clone the lmdeploy repository
-git clone https://github.com/InternLM/lmdeploy.git || exit 1
-
-# Copy the lib folder into the freshly cloned lmdeploy
-cp -r "$lib_dir" "lmdeploy/lmdeploy/" || exit 1
-
-pip uninstall -y lmdeploy
-
-cd lmdeploy && git checkout v0.0.10 && cd ..
-mv lmdeploy lmdeploy-backup
-mv lmdeploy-backup/lmdeploy lmdeploy
-
-echo "Script executed successfully"
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/contributing.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/contributing.md
deleted file mode 100644
index 95181235eaff1cb5cbb2dc554e8d4991b603d0e5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/contributing.md
+++ /dev/null
@@ -1 +0,0 @@
-../../.github/CONTRIBUTING.md
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/__init__.py
deleted file mode 100644
index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/nasnet.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/nasnet.py
deleted file mode 100644
index a7016a901059661911496824247731c27a35a098..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/nasnet.py
+++ /dev/null
@@ -1,218 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Zhenwei Shao https://github.com/ParadoxZW
-# --------------------------------------------------------
-
-from openvqa.ops.fc import FC, MLP
-from openvqa.ops.layer_norm import LayerNorm
-
-import torch.nn as nn
-import torch.nn.functional as F
-import torch
-import math
-
-
-# ------------------------------
-# --- Operations and Modules ---
-# ------------------------------
-
-class RelMHAtt(nn.Module):
- def __init__(self, __C):
- super(RelMHAtt, self).__init__()
- self.__C = __C
- self.HBASE = __C.REL_HBASE
- self.HHEAD = int(__C.HIDDEN_SIZE / __C.REL_HBASE)
-
- self.linear_v = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_k = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_q = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_merge = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_r = nn.Linear(__C.REL_SIZE, self.HHEAD, bias=True)
-
- self.dropout = nn.Dropout(__C.DROPOUT_R)
- self.relu = nn.ReLU(inplace=True)
-
- def forward(self, v, k, q, mask=None, rel_embed=None):
- assert rel_embed is not None
- n_batches = q.size(0)
-
- v = self.linear_v(v).view(n_batches, -1, self.HHEAD,
- self.HBASE).transpose(1, 2)
- k = self.linear_k(k).view(n_batches, -1, self.HHEAD,
- self.HBASE).transpose(1, 2)
- q = self.linear_q(q).view(n_batches, -1, self.HHEAD,
- self.HBASE).transpose(1, 2)
- r = self.relu(self.linear_r(rel_embed)).permute(0, 3, 1, 2)
-
- d_k = q.size(-1)
- scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
- scores = torch.log(torch.clamp(r, min=1e-6)) + scores
- if mask is not None:
- scores = scores.masked_fill(mask, -1e9)
- att_map = F.softmax(scores, dim=-1)
- att_map = self.dropout(att_map)
- atted = torch.matmul(att_map, v)
-
- atted = atted.transpose(1, 2).contiguous().view(
- n_batches, -1, self.__C.HIDDEN_SIZE)
- atted = self.linear_merge(atted)
-
- return atted
-
-
-class MHAtt(nn.Module):
- def __init__(self, __C):
- super(MHAtt, self).__init__()
- self.__C = __C
-
- self.linear_v = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_k = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_q = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
- self.linear_merge = nn.Linear(__C.HIDDEN_SIZE, __C.HIDDEN_SIZE)
-
- self.dropout = nn.Dropout(__C.DROPOUT_R)
-
- def forward(self, v, k, q, mask):
- n_batches = q.size(0)
-
- v = self.linear_v(v).view(
- n_batches,
- -1,
- self.__C.MULTI_HEAD,
- int(self.__C.HIDDEN_SIZE / self.__C.MULTI_HEAD)
- ).transpose(1, 2)
-
- k = self.linear_k(k).view(
- n_batches,
- -1,
- self.__C.MULTI_HEAD,
- int(self.__C.HIDDEN_SIZE / self.__C.MULTI_HEAD)
- ).transpose(1, 2)
-
- q = self.linear_q(q).view(
- n_batches,
- -1,
- self.__C.MULTI_HEAD,
- int(self.__C.HIDDEN_SIZE / self.__C.MULTI_HEAD)
- ).transpose(1, 2)
-
- atted = self.att(v, k, q, mask)
- atted = atted.transpose(1, 2).contiguous().view(
- n_batches,
- -1,
- self.__C.HIDDEN_SIZE
- )
-
- atted = self.linear_merge(atted)
-
- return atted
-
- def att(self, value, key, query, mask):
- d_k = query.size(-1)
-
- scores = torch.matmul(
- query, key.transpose(-2, -1)
- ) / math.sqrt(d_k)
-
- if mask is not None:
- scores = scores.masked_fill(mask, -1e9)
-
- att_map = F.softmax(scores, dim=-1)
- att_map = self.dropout(att_map)
-
- return torch.matmul(att_map, value)
-
-
-class FFN(nn.Module):
- def __init__(self, __C):
- super(FFN, self).__init__()
-
- self.mlp = MLP(
- in_size=__C.HIDDEN_SIZE,
- mid_size=__C.HIDDEN_SIZE * 4,
- out_size=__C.HIDDEN_SIZE,
- dropout_r=__C.DROPOUT_R,
- use_relu=True
- )
-
- self.dropout = nn.Dropout(__C.DROPOUT_R)
- self.norm = LayerNorm(__C.HIDDEN_SIZE)
-
- def forward(self, x, arg1, arg2, arg3, arg4):
- x = self.norm(x + self.dropout(
- self.mlp(x)
- ))
- return x
-
-
-class SA(nn.Module):
- def __init__(self, __C, size=1024):
- super(SA, self).__init__()
-
- self.mhatt = MHAtt(__C)
-
- self.dropout = nn.Dropout(__C.DROPOUT_R)
- self.norm = LayerNorm(__C.HIDDEN_SIZE)
-
- def forward(self, y, arg1, y_mask, arg2, arg3):
- y = self.norm(y + self.dropout(
- self.mhatt(y, y, y, y_mask)
- ))
-
- return y
-
-
-class RSA(nn.Module):
- def __init__(self, __C, size=1024):
- super(RSA, self).__init__()
-
- self.mhatt = RelMHAtt(__C)
-
- self.dropout = nn.Dropout(__C.DROPOUT_R)
- self.norm = LayerNorm(__C.HIDDEN_SIZE)
-
- def forward(self, x, arg1, x_mask, arg2, rela):
- x = self.norm(x + self.dropout(
- self.mhatt(x, x, x, x_mask, rela)
- ))
-
- return x
-
-
-class GA(nn.Module):
- def __init__(self, __C):
- super(GA, self).__init__()
-
- self.mhatt = MHAtt(__C)
-
- self.dropout = nn.Dropout(__C.DROPOUT_R)
- self.norm = LayerNorm(__C.HIDDEN_SIZE)
-
- def forward(self, x, y, x_mask, y_mask, rela):
- x = self.norm(x + self.dropout(
- self.mhatt(v=y, k=y, q=x, mask=y_mask)
- ))
-
- return x
-
-
-# ------------------------------------------------
-# --- Encoder-Decoder Architecture of MMNasNet ---
-# ------------------------------------------------
-
-class NAS_ED(nn.Module):
- def __init__(self, __C):
- super(NAS_ED, self).__init__()
- enc = __C.ARCH['enc']
- dec = __C.ARCH['dec']
- self.enc_list = nn.ModuleList([eval(layer)(__C) for layer in enc])
- self.dec_list = nn.ModuleList([eval(layer)(__C) for layer in dec])
-
- def forward(self, y, x, y_mask, x_mask, rela):
- for enc in self.enc_list:
- y = enc(y, None, y_mask, None, None)
-
- for dec in self.dec_list:
- x = dec(x, y, x_mask, y_mask, rela)
-
- return y, x
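
NAS_ED above assembles its encoder and decoder by looking up each layer name in __C.ARCH and instantiating it with eval(layer)(__C). A minimal sketch of a compatible config object follows; the field names match the attributes the modules above read, but every value and the layer ordering are illustrative assumptions, not the searched MMNasNet architecture.

# Hypothetical config sketch for NAS_ED (all values are placeholders)
class Cfg:
    HIDDEN_SIZE = 512   # model width used by the attention/FFN blocks
    MULTI_HEAD = 8      # heads in MHAtt; HIDDEN_SIZE must divide evenly
    DROPOUT_R = 0.1
    REL_SIZE = 64       # size of the relational embedding fed to RSA
    REL_HBASE = 64      # per-head size used by RelMHAtt
    ARCH = {
        "enc": ["SA", "FFN", "SA", "FFN"],                # text encoder stack
        "dec": ["RSA", "GA", "FFN", "RSA", "GA", "FFN"],  # image decoder stack
    }

# ed = NAS_ED(Cfg())  # each name in ARCH resolves to a class defined above
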
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexp.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexp.h
deleted file mode 100644
index 151df397bd6cd2839cc01f0a55db0b96a54d520c..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexp.h
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- * Copyright 2013 Filipe RNC Maia
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*-
- * Copyright (c) 2011 David Schultz
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-
-/* adapted from FreeBSD:
- * lib/msun/src/s_cexp.c
- * lib/msun/src/k_exp.c
- *
- */
-
-#pragma once
-
-#include <thrust/complex.h>
-#include <thrust/detail/complex/math_private.h>
-
-namespace thrust{
-namespace detail{
-namespace complex{
-/*
- * Compute exp(x), scaled to avoid spurious overflow. An exponent is
- * returned separately in 'expt'.
- *
- * Input: ln(DBL_MAX) <= x < ln(2 * DBL_MAX / DBL_MIN_DENORM) ~= 1454.91
- * Output: 2**1023 <= y < 2**1024
- */
-__host__ __device__ inline
- double frexp_exp(double x, int *expt){
- const uint32_t k = 1799; /* constant for reduction */
- const double kln2 = 1246.97177782734161156; /* k * ln2 */
-
- double exp_x;
- uint32_t hx;
-
- /*
- * We use exp(x) = exp(x - kln2) * 2**k, carefully chosen to
- * minimize |exp(kln2) - 2**k|. We also scale the exponent of
- * exp_x to MAX_EXP so that the result can be multiplied by
- * a tiny number without losing accuracy due to denormalization.
- */
- exp_x = exp(x - kln2);
- get_high_word(hx, exp_x);
- *expt = (hx >> 20) - (0x3ff + 1023) + k;
- set_high_word(exp_x, (hx & 0xfffff) | ((0x3ff + 1023) << 20));
- return (exp_x);
-}
-
-
-__host__ __device__ inline
-complex<double> ldexp_cexp(complex<double> z, int expt){
- double x, y, exp_x, scale1, scale2;
- int ex_expt, half_expt;
-
- x = z.real();
- y = z.imag();
- exp_x = frexp_exp(x, &ex_expt);
- expt += ex_expt;
-
- /*
- * Arrange so that scale1 * scale2 == 2**expt. We use this to
- * compensate for scalbn being horrendously slow.
- */
- half_expt = expt / 2;
- insert_words(scale1, (0x3ff + half_expt) << 20, 0);
- half_expt = expt - half_expt;
- insert_words(scale2, (0x3ff + half_expt) << 20, 0);
-
- return (complex<double>(cos(y) * exp_x * scale1 * scale2,
- sin(y) * exp_x * scale1 * scale2));
-}
-
-
-__host__ __device__ inline
-complex<double> cexp(const complex<double>& z){
- double x, y, exp_x;
- uint32_t hx, hy, lx, ly;
-
- const uint32_t
- exp_ovfl = 0x40862e42, /* high bits of MAX_EXP * ln2 ~= 710 */
- cexp_ovfl = 0x4096b8e4; /* (MAX_EXP - MIN_DENORM_EXP) * ln2 */
-
-
- x = z.real();
- y = z.imag();
-
- extract_words(hy, ly, y);
- hy &= 0x7fffffff;
-
- /* cexp(x + I 0) = exp(x) + I 0 */
- if ((hy | ly) == 0)
- return (complex<double>(exp(x), y));
- extract_words(hx, lx, x);
- /* cexp(0 + I y) = cos(y) + I sin(y) */
- if (((hx & 0x7fffffff) | lx) == 0)
- return (complex<double>(cos(y), sin(y)));
-
- if (hy >= 0x7ff00000) {
- if (lx != 0 || (hx & 0x7fffffff) != 0x7ff00000) {
- /* cexp(finite|NaN +- I Inf|NaN) = NaN + I NaN */
- return (complex<double>(y - y, y - y));
- } else if (hx & 0x80000000) {
- /* cexp(-Inf +- I Inf|NaN) = 0 + I 0 */
- return (complex<double>(0.0, 0.0));
- } else {
- /* cexp(+Inf +- I Inf|NaN) = Inf + I NaN */
- return (complex<double>(x, y - y));
- }
- }
-
- if (hx >= exp_ovfl && hx <= cexp_ovfl) {
- /*
- * x is between 709.7 and 1454.3, so we must scale to avoid
- * overflow in exp(x).
- */
- return (ldexp_cexp(z, 0));
- } else {
- /*
- * Cases covered here:
- * - x < exp_ovfl and exp(x) won't overflow (common case)
- * - x > cexp_ovfl, so exp(x) * s overflows for all s > 0
- * - x = +-Inf (generated by exp())
- * - x = NaN (spurious inexact exception from y)
- */
- exp_x = std::exp(x);
- return (complex<double>(exp_x * cos(y), exp_x * sin(y)));
- }
-}
-
-} // namespace complex
-
-} // namespace detail
-
-template <typename ValueType>
-__host__ __device__
-inline complex<ValueType> exp(const complex<ValueType>& z){
- return polar(std::exp(z.real()),z.imag());
-}
-
-template <>
-__host__ __device__
-inline complex<double> exp(const complex<double>& z){
- return detail::complex::cexp(z);
-}
-
-} // namespace thrust
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/internal/copy_device_to_device.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/internal/copy_device_to_device.h
deleted file mode 100644
index 7a6631d90321bf52c5441aacfe86c7cf6ea71a5b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/internal/copy_device_to_device.h
+++ /dev/null
@@ -1,64 +0,0 @@
-
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace cuda_cub {
-
-namespace __copy {
-
- template <class Derived, class InputIt, class OutputIt>
- OutputIt THRUST_RUNTIME_FUNCTION
- device_to_device(execution_policy<Derived>& policy,
- InputIt first,
- InputIt last,
- OutputIt result)
- {
- typedef typename thrust::iterator_traits<InputIt>::value_type InputTy;
- return cuda_cub::transform(policy,
- first,
- last,
- result,
- thrust::identity<InputTy>());
- }
-
-} // namespace __copy
-
-} // namespace cuda_cub
-} // end namespace thrust
-#endif
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/error_code.h b/spaces/CVPR/LIVE/thrust/thrust/system/error_code.h
deleted file mode 100644
index faa81bbca38c5fc8c6d0fa1dc8e803b66db02568..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/error_code.h
+++ /dev/null
@@ -1,523 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file error_code.h
- * \brief An object used to hold error values, such as those originating from the
- * operating system or other low-level application program interfaces.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/type_traits.h>
-#include <thrust/system/detail/errno.h>
-#include <iostream>
-
-namespace thrust
-{
-
-namespace system
-{
-
-
-/*! \addtogroup system_diagnostics
- * \{
- */
-
-class error_condition;
-class error_code;
-
-/*! A metafunction returning whether or not the parameter is an \p error_code enum.
- */
-template<typename T> struct is_error_code_enum : public thrust::detail::false_type {};
-
-/*! A metafunction returning whether or not the parameter is an \p error_condition enum.
- */
-template<typename T> struct is_error_condition_enum : public thrust::detail::false_type {};
-
-
-// XXX N3092 prefers enum class errc { ... }
-namespace errc
-{
-
-/*! An enum containing common error codes.
- */
-enum errc_t
-{
- address_family_not_supported = detail::eafnosupport,
- address_in_use = detail::eaddrinuse,
- address_not_available = detail::eaddrnotavail,
- already_connected = detail::eisconn,
- argument_list_too_long = detail::e2big,
- argument_out_of_domain = detail::edom,
- bad_address = detail::efault,
- bad_file_descriptor = detail::ebadf,
- bad_message = detail::ebadmsg,
- broken_pipe = detail::epipe,
- connection_aborted = detail::econnaborted,
- connection_already_in_progress = detail::ealready,
- connection_refused = detail::econnrefused,
- connection_reset = detail::econnreset,
- cross_device_link = detail::exdev,
- destination_address_required = detail::edestaddrreq,
- device_or_resource_busy = detail::ebusy,
- directory_not_empty = detail::enotempty,
- executable_format_error = detail::enoexec,
- file_exists = detail::eexist,
- file_too_large = detail::efbig,
- filename_too_long = detail::enametoolong,
- function_not_supported = detail::enosys,
- host_unreachable = detail::ehostunreach,
- identifier_removed = detail::eidrm,
- illegal_byte_sequence = detail::eilseq,
- inappropriate_io_control_operation = detail::enotty,
- interrupted = detail::eintr,
- invalid_argument = detail::einval,
- invalid_seek = detail::espipe,
- io_error = detail::eio,
- is_a_directory = detail::eisdir,
- message_size = detail::emsgsize,
- network_down = detail::enetdown,
- network_reset = detail::enetreset,
- network_unreachable = detail::enetunreach,
- no_buffer_space = detail::enobufs,
- no_child_process = detail::echild,
- no_link = detail::enolink,
- no_lock_available = detail::enolck,
- no_message_available = detail::enodata,
- no_message = detail::enomsg,
- no_protocol_option = detail::enoprotoopt,
- no_space_on_device = detail::enospc,
- no_stream_resources = detail::enosr,
- no_such_device_or_address = detail::enxio,
- no_such_device = detail::enodev,
- no_such_file_or_directory = detail::enoent,
- no_such_process = detail::esrch,
- not_a_directory = detail::enotdir,
- not_a_socket = detail::enotsock,
- not_a_stream = detail::enostr,
- not_connected = detail::enotconn,
- not_enough_memory = detail::enomem,
- not_supported = detail::enotsup,
- operation_canceled = detail::ecanceled,
- operation_in_progress = detail::einprogress,
- operation_not_permitted = detail::eperm,
- operation_not_supported = detail::eopnotsupp,
- operation_would_block = detail::ewouldblock,
- owner_dead = detail::eownerdead,
- permission_denied = detail::eacces,
- protocol_error = detail::eproto,
- protocol_not_supported = detail::eprotonosupport,
- read_only_file_system = detail::erofs,
- resource_deadlock_would_occur = detail::edeadlk,
- resource_unavailable_try_again = detail::eagain,
- result_out_of_range = detail::erange,
- state_not_recoverable = detail::enotrecoverable,
- stream_timeout = detail::etime,
- text_file_busy = detail::etxtbsy,
- timed_out = detail::etimedout,
- too_many_files_open_in_system = detail::enfile,
- too_many_files_open = detail::emfile,
- too_many_links = detail::emlink,
- too_many_symbolic_link_levels = detail::eloop,
- value_too_large = detail::eoverflow,
- wrong_protocol_type = detail::eprototype
-}; // end errc_t
-
-} // end namespace errc
-
-
-/*! Specialization of \p is_error_condition_enum for \p errc::errc_t
- */
-template<> struct is_error_condition_enum<errc::errc_t> : public thrust::detail::true_type {};
-
-
-// [19.5.1.1] class error_category
-
-/*! \brief The class \p error_category serves as a base class for types used to identify the
- * source and encoding of a particular category of error code. Classes may be derived
- * from \p error_category to support categories of errors in addition to those defined
- * in the C++ International Standard.
- */
-class error_category
-{
- public:
- /*! Destructor does nothing.
- */
- inline virtual ~error_category(void);
-
- // XXX enable upon c++0x
- // error_category(const error_category &) = delete;
- // error_category &operator=(const error_category &) = delete;
-
- /*! \return A string naming the error category.
- */
- inline virtual const char *name(void) const = 0;
-
- /*! \return \p error_condition(ev, *this).
- */
- inline virtual error_condition default_error_condition(int ev) const;
-
- /*! \return default_error_condition(code) == condition
- */
- inline virtual bool equivalent(int code, const error_condition &condition) const;
-
- /*! \return *this == code.category() && code.value() == condition
- */
- inline virtual bool equivalent(const error_code &code, int condition) const;
-
- /*! \return A string that describes the error condition denoted by \p ev.
- */
- virtual std::string message(int ev) const = 0;
-
- /*! \return *this == &rhs
- */
- inline bool operator==(const error_category &rhs) const;
-
- /*! \return !(*this == rhs)
- */
- inline bool operator!=(const error_category &rhs) const;
-
- /*! \return less<const error_category*>()(this, &rhs)
- * \note \c less provides a total ordering for pointers.
- */
- inline bool operator<(const error_category &rhs) const;
-}; // end error_category
-
-
-// [19.5.1.5] error_category objects
-
-
-/*! \return A reference to an object of a type derived from class \p error_category.
- * \note The object's \p default_error_condition and \p equivalent virtual functions
- * shall behave as specified for the class \p error_category. The object's
- * \p name virtual function shall return a pointer to the string "generic" .
- */
-inline const error_category &generic_category(void);
-
-
-/*! \return A reference to an object of a type derived from class \p error_category.
- * \note The object's \p equivalent virtual functions shall behave as specified for
- * class \p error_category. The object's \p name virtual function shall return
- * a pointer to the string "system" . The object's \p default_error_condition
- * virtual function shall behave as follows:
- *
- * If the argument ev corresponds to a POSIX errno value
- * \c posv, the function shall return error_condition(ev,generic_category()) .
- * Otherwise, the function shall return error_condition(ev,system_category()) .
- * What constitutes correspondence for any given operating system is unspecified.
- */
-inline const error_category &system_category(void);
-
-
-// [19.5.2] Class error_code
-
-
-/*! \brief The class \p error_code describes an object used to hold error code values, such as
- * those originating from the operating system or other low-level application program
- * interfaces.
- */
-class error_code
-{
- public:
- // [19.5.2.2] constructors:
-
- /*! Effects: Constructs an object of type \p error_code.
- * \post value() == 0 and category() == &system_category() .
- */
- inline error_code(void);
-
- /*! Effects: Constructs an object of type \p error_code.
- * \post value() == val and category() == &cat .
- */
- inline error_code(int val, const error_category &cat);
-
- /*! Effects: Constructs an object of type \p error_code.
- * \post *this == make_error_code(e) .
- */
- template <typename ErrorCodeEnum>
- error_code(ErrorCodeEnum e
-// XXX WAR msvc's problem with enable_if
-#if THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
- , typename thrust::detail::enable_if<is_error_code_enum<ErrorCodeEnum>::value>::type * = 0
-#endif // THRUST_HOST_COMPILER_MSVC
- );
-
- // [19.5.2.3] modifiers:
-
- /*! \post value() == val and category() == &cat .
- */
- inline void assign(int val, const error_category &cat);
-
- /*! \post *this == make_error_code(e) .
- */
- template <typename ErrorCodeEnum>
-// XXX WAR msvc's problem with enable_if
-#if THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
- typename thrust::detail::enable_if<is_error_code_enum<ErrorCodeEnum>::value, error_code>::type &
-#else
- error_code &
-#endif // THRUST_HOST_COMPILER_MSVC
- operator=(ErrorCodeEnum e);
-
- /*! \post value() == 0 and category() == system_category() .
- */
- inline void clear(void);
-
- // [19.5.2.4] observers:
-
- /*! \return An integral value of this \p error_code object.
- */
- inline int value(void) const;
-
- /*! \return An \p error_category describing the category of this \p error_code object.
- */
- inline const error_category &category(void) const;
-
- /*! \return category().default_error_condition() .
- */
- inline error_condition default_error_condition(void) const;
-
- /*! \return category().message(value()) .
- */
- inline std::string message(void) const;
-
- // XXX replace the below upon c++0x
- // inline explicit operator bool (void) const;
-
- /*! \return value() != 0 .
- */
- inline operator bool (void) const;
-
- /*! \cond
- */
- private:
- int m_val;
- const error_category *m_cat;
- /*! \endcond
- */
-}; // end error_code
-
-
-// [19.5.2.5] Class error_code non-member functions
-
-
-// XXX replace errc::errc_t with errc upon c++0x
-/*! \return error_code(static_cast<int>(e), generic_category())
- */
-inline error_code make_error_code(errc::errc_t e);
-
-
-/*! \return lhs.category() < rhs.category() || lhs.category() == rhs.category() && lhs.value() < rhs.value() .
- */
-inline bool operator<(const error_code &lhs, const error_code &rhs);
-
-
-/*! Effects: os << ec.category().name() << ':' << ec.value() .
- */
-template <typename charT, typename traits>
- std::basic_ostream<charT,traits>&
- operator<<(std::basic_ostream<charT,traits>& os, const error_code &ec);
-
-
-// [19.5.3] class error_condition
-
-
-/*! \brief The class \p error_condition describes an object used to hold values identifying
- * error conditions.
- *
- * \note \p error_condition values are portable abstractions, while \p error_code values
- * are implementation specific.
- */
-class error_condition
-{
- public:
- // [19.5.3.2] constructors
-
- /*! Constructs an object of type \p error_condition.
- * \post value() == 0 .
- * \post category() == generic_category() .
- */
- inline error_condition(void);
-
- /*! Constructs an object of type \p error_condition.
- * \post value() == val .
- * \post category() == cat .
- */
- inline error_condition(int val, const error_category &cat);
-
- /*! Constructs an object of type \p error_condition.
- * \post *this == make_error_condition(e) .
- * \note This constructor shall not participate in overload resolution unless
- * is_error_condition_enum<ErrorConditionEnum>::value is true .
- */
- template <typename ErrorConditionEnum>
- error_condition(ErrorConditionEnum e
-// XXX WAR msvc's problem with enable_if
-#if THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
- , typename thrust::detail::enable_if<is_error_condition_enum<ErrorConditionEnum>::value>::type * = 0
-#endif // THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
- );
-
- // [19.5.3.3] modifiers
-
- /*! Assigns to this \p error_code object from an error value and an \p error_category.
- * \param val The new value to return from value() .
- * \param cat The new \p error_category to return from category() .
- * \post value() == val .
- * \post category() == cat .
- */
- inline void assign(int val, const error_category &cat);
-
- /*! Assigns to this \p error_code object from an error condition enumeration.
- * \return *this
- * \post *this == make_error_condition(e) .
- * \note This operator shall not participate in overload resolution unless
- * is_error_condition_enum<ErrorConditionEnum>::value is true .
- */
- template <typename ErrorConditionEnum>
-// XXX WAR msvc's problem with enable_if
-#if THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
- typename thrust::detail::enable_if<is_error_condition_enum<ErrorConditionEnum>::value, error_condition>::type &
-#else
- error_condition &
-#endif // THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
- operator=(ErrorConditionEnum e);
-
- /*! Clears this \p error_code object.
- * \post value == 0
- * \post category() == generic_category() .
- */
- inline void clear(void);
-
- // [19.5.3.4] observers
-
- /*! \return The value encoded by this \p error_condition.
- */
- inline int value(void) const;
-
- /*! \return A const reference to the \p error_category encoded by this \p error_condition.
- */
- inline const error_category &category(void) const;
-
- /*! \return category().message(value()) .
- */
- inline std::string message(void) const;
-
- // XXX replace below with this upon c++0x
- //explicit operator bool (void) const;
-
- /*! \return value() != 0 .
- */
- inline operator bool (void) const;
-
- /*! \cond
- */
-
- private:
- int m_val;
- const error_category *m_cat;
-
- /*! \endcond
- */
-}; // end error_condition
-
-
-
-// [19.5.3.5] Class error_condition non-member functions
-
-// XXX replace errc::errc_t with errc upon c++0x
-/*! \return error_condition(static_cast<int>(e), generic_category()) .
- */
-inline error_condition make_error_condition(errc::errc_t e);
-
-
-/*! \return lhs.category() < rhs.category() || lhs.category() == rhs.category() && lhs.value() < rhs.value() .
- */
-inline bool operator<(const error_condition &lhs, const error_condition &rhs);
-
-
-// [19.5.4] Comparison operators
-
-
-/*! \return lhs.category() == rhs.category() && lhs.value() == rhs.value() .
- */
-inline bool operator==(const error_code &lhs, const error_code &rhs);
-
-
-/*! \return lhs.category().equivalent(lhs.value(), rhs) || rhs.category().equivalent(lhs,rhs.value()) .
- */
-inline bool operator==(const error_code &lhs, const error_condition &rhs);
-
-
-/*! \return rhs.category().equivalent(lhs.value(), lhs) || lhs.category().equivalent(rhs, lhs.value()) .
- */
-inline bool operator==(const error_condition &lhs, const error_code &rhs);
-
-
-/*! \return lhs.category() == rhs.category() && lhs.value() == rhs.value()
- */
-inline bool operator==(const error_condition &lhs, const error_condition &rhs);
-
-
-/*! \return !(lhs == rhs)
- */
-inline bool operator!=(const error_code &lhs, const error_code &rhs);
-
-
-/*! \return !(lhs == rhs)
- */
-inline bool operator!=(const error_code &lhs, const error_condition &rhs);
-
-
-/*! \return !(lhs == rhs)
- */
-inline bool operator!=(const error_condition &lhs, const error_code &rhs);
-
-
-/*! \return !(lhs == rhs)
- */
-inline bool operator!=(const error_condition &lhs, const error_condition &rhs);
-
-/*! \} // end system_diagnostics
- */
-
-
-} // end system
-
-
-// import names into thrust::
-using system::error_category;
-using system::error_code;
-using system::error_condition;
-using system::is_error_code_enum;
-using system::is_error_condition_enum;
-using system::make_error_code;
-using system::make_error_condition;
-
-// XXX replace with using system::errc upon c++0x
-namespace errc = system::errc;
-
-using system::generic_category;
-using system::system_category;
-
-} // end thrust
-
-#include <thrust/system/detail/error_category.inl>
-#include <thrust/system/detail/error_code.inl>
-#include <thrust/system/detail/error_condition.inl>
-
diff --git a/spaces/CVPR/Text2Human/Text2Human/ui/mouse_event.py b/spaces/CVPR/Text2Human/Text2Human/ui/mouse_event.py
deleted file mode 100644
index 87c5f85e0fde810bb72c0814352e30f475900d34..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/ui/mouse_event.py
+++ /dev/null
@@ -1,129 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import numpy as np
-from PyQt5.QtCore import *
-from PyQt5.QtGui import *
-from PyQt5.QtWidgets import *
-
-color_list = [
- QColor(0, 0, 0),
- QColor(255, 250, 250),
- QColor(220, 220, 220),
- QColor(250, 235, 215),
- QColor(255, 250, 205),
- QColor(211, 211, 211),
- QColor(70, 130, 180),
- QColor(127, 255, 212),
- QColor(0, 100, 0),
- QColor(50, 205, 50),
- QColor(255, 255, 0),
- QColor(245, 222, 179),
- QColor(255, 140, 0),
- QColor(255, 0, 0),
- QColor(16, 78, 139),
- QColor(144, 238, 144),
- QColor(50, 205, 174),
- QColor(50, 155, 250),
- QColor(160, 140, 88),
- QColor(213, 140, 88),
- QColor(90, 140, 90),
- QColor(185, 210, 205),
- QColor(130, 165, 180),
- QColor(225, 141, 151)
-]
-
-
-class GraphicsScene(QGraphicsScene):
-
- def __init__(self, mode, size, parent=None):
- QGraphicsScene.__init__(self, parent)
- self.mode = mode
- self.size = size
- self.mouse_clicked = False
- self.prev_pt = None
-
- # self.masked_image = None
-
- # save the points
- self.mask_points = []
- for i in range(len(color_list)):
- self.mask_points.append([])
-
- # save the size of points
- self.size_points = []
- for i in range(len(color_list)):
- self.size_points.append([])
-
- # save the history of edit
- self.history = []
-
- def reset(self):
- # save the points
- self.mask_points = []
- for i in range(len(color_list)):
- self.mask_points.append([])
- # save the size of points
- self.size_points = []
- for i in range(len(color_list)):
- self.size_points.append([])
- # save the history of edit
- self.history = []
-
- self.mode = 0
- self.prev_pt = None
-
- def mousePressEvent(self, event):
- self.mouse_clicked = True
-
- def mouseReleaseEvent(self, event):
- self.prev_pt = None
- self.mouse_clicked = False
-
- def mouseMoveEvent(self, event): # drawing
- if self.mouse_clicked:
- if self.prev_pt:
- self.drawMask(self.prev_pt, event.scenePos(),
- color_list[self.mode], self.size)
- pts = {}
- pts['prev'] = (int(self.prev_pt.x()), int(self.prev_pt.y()))
- pts['curr'] = (int(event.scenePos().x()),
- int(event.scenePos().y()))
-
- self.size_points[self.mode].append(self.size)
- self.mask_points[self.mode].append(pts)
- self.history.append(self.mode)
- self.prev_pt = event.scenePos()
- else:
- self.prev_pt = event.scenePos()
-
- def drawMask(self, prev_pt, curr_pt, color, size):
- lineItem = QGraphicsLineItem(QLineF(prev_pt, curr_pt))
- lineItem.setPen(QPen(color, size, Qt.SolidLine)) # rect
- self.addItem(lineItem)
-
- def erase_prev_pt(self):
- self.prev_pt = None
-
- def reset_items(self):
- for i in range(len(self.items())):
- item = self.items()[0]
- self.removeItem(item)
-
- def undo(self):
- if len(self.items()) > 1:
- if len(self.items()) >= 9:
- for i in range(8):
- item = self.items()[0]
- self.removeItem(item)
- if self.history[-1] == self.mode:
- self.mask_points[self.mode].pop()
- self.size_points[self.mode].pop()
- self.history.pop()
- else:
- for i in range(len(self.items()) - 1):
- item = self.items()[0]
- self.removeItem(item)
- if self.history[-1] == self.mode:
- self.mask_points[self.mode].pop()
- self.size_points[self.mode].pop()
- self.history.pop()
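
For context, a hedged sketch of how the GraphicsScene above is typically hooked into a Qt view; the mode index and pen size are arbitrary illustrative values, and the snippet assumes the GraphicsScene class defined above is importable in scope.

import sys
from PyQt5.QtWidgets import QApplication, QGraphicsView

app = QApplication(sys.argv)
scene = GraphicsScene(mode=0, size=6)  # mode indexes color_list, size is the pen width
view = QGraphicsView(scene)
view.show()
# app.exec_()  # start the event loop; dragging the mouse then drives drawMask()
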
diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py
deleted file mode 100644
index 847932547c6c309ae38b45dc43ac0ef8ca66d347..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/roi_heads/roi_extractors/base_roi_extractor.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from abc import ABCMeta, abstractmethod
-
-import torch
-import torch.nn as nn
-from mmcv import ops
-
-
-class BaseRoIExtractor(nn.Module, metaclass=ABCMeta):
- """Base class for RoI extractor.
-
- Args:
- roi_layer (dict): Specify RoI layer type and arguments.
- out_channels (int): Output channels of RoI layers.
- featmap_strides (List[int]): Strides of input feature maps.
- """
-
- def __init__(self, roi_layer, out_channels, featmap_strides):
- super(BaseRoIExtractor, self).__init__()
- self.roi_layers = self.build_roi_layers(roi_layer, featmap_strides)
- self.out_channels = out_channels
- self.featmap_strides = featmap_strides
- self.fp16_enabled = False
-
- @property
- def num_inputs(self):
- """int: Number of input feature maps."""
- return len(self.featmap_strides)
-
- def init_weights(self):
- pass
-
- def build_roi_layers(self, layer_cfg, featmap_strides):
- """Build RoI operator to extract feature from each level feature map.
-
- Args:
- layer_cfg (dict): Dictionary to construct and config RoI layer
- operation. Options are modules under ``mmcv/ops`` such as
- ``RoIAlign``.
- featmap_strides (List[int]): The stride of input feature map w.r.t
- to the original image size, which would be used to scale RoI
- coordinate (original image coordinate system) to feature
- coordinate system.
-
- Returns:
- nn.ModuleList: The RoI extractor modules for each level feature
- map.
- """
-
- cfg = layer_cfg.copy()
- layer_type = cfg.pop('type')
- assert hasattr(ops, layer_type)
- layer_cls = getattr(ops, layer_type)
- roi_layers = nn.ModuleList(
- [layer_cls(spatial_scale=1 / s, **cfg) for s in featmap_strides])
- return roi_layers
-
- def roi_rescale(self, rois, scale_factor):
- """Scale RoI coordinates by scale factor.
-
- Args:
- rois (torch.Tensor): RoI (Region of Interest), shape (n, 5)
- scale_factor (float): Scale factor that RoI will be multiplied by.
-
- Returns:
- torch.Tensor: Scaled RoI.
- """
-
- cx = (rois[:, 1] + rois[:, 3]) * 0.5
- cy = (rois[:, 2] + rois[:, 4]) * 0.5
- w = rois[:, 3] - rois[:, 1]
- h = rois[:, 4] - rois[:, 2]
- new_w = w * scale_factor
- new_h = h * scale_factor
- x1 = cx - new_w * 0.5
- x2 = cx + new_w * 0.5
- y1 = cy - new_h * 0.5
- y2 = cy + new_h * 0.5
- new_rois = torch.stack((rois[:, 0], x1, y1, x2, y2), dim=-1)
- return new_rois
-
- @abstractmethod
- def forward(self, feats, rois, roi_scale_factor=None):
- pass
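
As a rough sketch of how a concrete subclass would feed build_roi_layers and roi_rescale above: the RoIAlign dict follows the usual mmcv/mmdet config convention, while the stride list, tensor values, and the SomeRoIExtractor subclass name are illustrative assumptions.

import torch

# roi_layer dict as described in the build_roi_layers docstring:
# 'type' names an op under mmcv.ops, the remaining keys are its constructor kwargs
roi_layer_cfg = dict(type="RoIAlign", output_size=7, sampling_ratio=0)
featmap_strides = [4, 8, 16, 32]  # one RoI layer per level, spatial_scale = 1 / stride

# RoIs are rows of (batch_idx, x1, y1, x2, y2); roi_rescale grows them around their centre
rois = torch.tensor([[0.0, 10.0, 10.0, 50.0, 30.0]])
# extractor = SomeRoIExtractor(roi_layer_cfg, out_channels=256, featmap_strides=featmap_strides)
# scaled = extractor.roi_rescale(rois, scale_factor=1.5)
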
diff --git a/spaces/Cat125/text-generator-v2/classes.py b/spaces/Cat125/text-generator-v2/classes.py
deleted file mode 100644
index 677342e2aa16dc57aeb9462cfb22c32717815876..0000000000000000000000000000000000000000
--- a/spaces/Cat125/text-generator-v2/classes.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from random import choice
-
-import pymorphy3
-
-morph = pymorphy3.MorphAnalyzer()
-
-# The Token class takes in a word, previous word, text, sentence, and a boolean value and creates a
-# token object with attributes such as count, score, and contexts.
-class Token:
- def __init__(self, word, prev_word, text, sentence, starter = False, turbo = False):
- """
- This function initializes a Token with various properties related to a given word and its context
- within a sentence.
-
- :param word: The current word being analyzed
- :param prev_word: The word that comes before the current word in the text
- :param text: a string containing the entire text to be analyzed
- :param sentence: a string representing a sentence in which the word and prev_word occur
- :param turbo: A boolean parameter that, when set to True, skips the morphological analysis of words
- in the sentence and simply adds all words to the context list. This can be useful for faster
- processing, but may result in less accurate context information, defaults to False (optional)
- """
- self.word = word
- self.prev_word = prev_word
- self.count = text.count(prev_word + " " + word)
- self.score = 0
- self.starter = starter
- self.contexts = []
- for w in sentence.strip().split():
- if turbo:
- self.contexts.append(w)
- continue
- result = morph.parse(w)
- if len(result) == 0:
- continue
- result = result[0]
- if 'LATN' in result.tag:
- continue
- if result.tag.POS == 'NOUN':
- self.contexts.append(w)
- self.contexts.append(result.normal_form)
-
- def __repr__(self):
- """
- This function returns a string representation of a Token with information about the previous
- word, current word, number of matches, and number of contexts.
- :return: A string representation of a Token.
- """
- return f"'{self.prev_word} > {self.word} ({'starter, ' if self.starter else ''}{self.count}m, {len(self.contexts)}c)'"
\ No newline at end of file
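
A quick, hedged sketch of constructing the Token class above; the toy corpus is made up purely for illustration, and turbo=True skips the pymorphy3 analysis as the docstring notes.

text = "the cat sat on the mat. the cat slept."
sentence = "the cat sat on the mat"

# count becomes text.count("the cat") == 2; with turbo=True every word of the
# sentence is stored as context without morphological analysis
token = Token("cat", "the", text, sentence, starter=True, turbo=True)
print(token)  # 'the > cat (starter, 2m, 6c)'
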
diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/Dockerfile b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/Dockerfile
deleted file mode 100644
index fdaa5c8fc3acab6413d9b0383eeaa6eac09c016a..0000000000000000000000000000000000000000
--- a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/Dockerfile
+++ /dev/null
@@ -1,27 +0,0 @@
-# Dockerfile
-
-# The first instruction is what image we want to base our container on
-# We Use an official Python runtime as a parent image
-FROM python:3.11.5
-
-# copy and mount application code to image
-RUN mkdir -p /code
-VOLUME /data:/code
-RUN chmod -R 777 /code/
-COPY . code
-WORKDIR /code
-RUN chmod -R 777 /code/
-
-ENV HF_HOME=/code/.huggingface
-
-# Allows docker to cache installed dependencies between builds
-COPY requirements.txt requirements.txt
-RUN pip install -r requirements.txt
-# add --no-cache-dir as a parameter to install requirements without using cache
-
-EXPOSE 7860
-# CMD ["/launch.sh"]
-
-# runs the production server
-ENTRYPOINT ["python", "mysite/manage.py"]
-CMD ["runserver", "0.0.0.0:7860"]
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/forbid/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/forbid/__init__.py
deleted file mode 100644
index c0aa89c8026592ace5a61e9bf577f40ed03b2a57..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/forbid/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from meme_generator import add_meme
-from meme_generator.utils import make_jpg_or_gif
-from pil_utils import BuildImage
-
-img_dir = Path(__file__).parent / "images"
-
-
-def forbid(images: List[BuildImage], texts, args):
- frame = BuildImage.open(img_dir / "0.png")
-
- def make(img: BuildImage) -> BuildImage:
- return frame.copy().paste(
- img.resize((304, 324), keep_ratio=True), (0, 0), below=True
- )
-
- return make_jpg_or_gif(images[0], make)
-
-
-add_meme("forbid", forbid, min_images=1, max_images=1, keywords=["禁止", "禁"])
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/http.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/http.py
deleted file mode 100644
index afd0c2664b295c62b29e0d258c1908e1937dac50..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/implementations/http.py
+++ /dev/null
@@ -1,862 +0,0 @@
-from __future__ import absolute_import, division, print_function
-
-import asyncio
-import io
-import logging
-import re
-import weakref
-from copy import copy
-from urllib.parse import urlparse
-
-import aiohttp
-import requests
-import yarl
-
-from fsspec.asyn import AbstractAsyncStreamedFile, AsyncFileSystem, sync, sync_wrapper
-from fsspec.callbacks import _DEFAULT_CALLBACK
-from fsspec.exceptions import FSTimeoutError
-from fsspec.spec import AbstractBufferedFile
-from fsspec.utils import DEFAULT_BLOCK_SIZE, isfilelike, nullcontext, tokenize
-
-from ..caching import AllBytes
-
-# https://stackoverflow.com/a/15926317/3821154
-ex = re.compile(r"""<(a|A)\s+(?:[^>]*?\s+)?(href|HREF)=["'](?P<url>[^"']+)""")
-ex2 = re.compile(r"""(?P<url>http[s]?://[-a-zA-Z0-9@:%_+.~#?&/=]+)""")
-logger = logging.getLogger("fsspec.http")
-
-
-async def get_client(**kwargs):
- return aiohttp.ClientSession(**kwargs)
-
-
-class HTTPFileSystem(AsyncFileSystem):
- """
- Simple File-System for fetching data via HTTP(S)
-
- ``ls()`` is implemented by loading the parent page and doing a regex
-    match on the result. If simple_links=True, anything of the form
- "http(s)://server.com/stuff?thing=other"; otherwise only links within
- HTML href tags will be used.
- """
-
- sep = "/"
-
- def __init__(
- self,
- simple_links=True,
- block_size=None,
- same_scheme=True,
- size_policy=None,
- cache_type="bytes",
- cache_options=None,
- asynchronous=False,
- loop=None,
- client_kwargs=None,
- get_client=get_client,
- encoded=False,
- **storage_options,
- ):
- """
-        NB: if this is called async, you must await set_session
-
- Parameters
- ----------
- block_size: int
- Blocks to read bytes; if 0, will default to raw requests file-like
- objects instead of HTTPFile instances
- simple_links: bool
- If True, will consider both HTML tags and anything that looks
- like a URL; if False, will consider only the former.
- same_scheme: True
- When doing ls/glob, if this is True, only consider paths that have
- http/https matching the input URLs.
- size_policy: this argument is deprecated
- client_kwargs: dict
- Passed to aiohttp.ClientSession, see
- https://docs.aiohttp.org/en/stable/client_reference.html
- For example, ``{'auth': aiohttp.BasicAuth('user', 'pass')}``
- get_client: Callable[..., aiohttp.ClientSession]
- A callable which takes keyword arguments and constructs
-            an aiohttp.ClientSession. Its state will be managed by
- the HTTPFileSystem class.
- storage_options: key-value
- Any other parameters passed on to requests
- cache_type, cache_options: defaults used in open
- """
- super().__init__(self, asynchronous=asynchronous, loop=loop, **storage_options)
- self.block_size = block_size if block_size is not None else DEFAULT_BLOCK_SIZE
- self.simple_links = simple_links
- self.same_schema = same_scheme
- self.cache_type = cache_type
- self.cache_options = cache_options
- self.client_kwargs = client_kwargs or {}
- self.get_client = get_client
- self.encoded = encoded
- self.kwargs = storage_options
- self._session = None
-
- # Clean caching-related parameters from `storage_options`
- # before propagating them as `request_options` through `self.kwargs`.
- # TODO: Maybe rename `self.kwargs` to `self.request_options` to make
- # it clearer.
- request_options = copy(storage_options)
- self.use_listings_cache = request_options.pop("use_listings_cache", False)
- request_options.pop("listings_expiry_time", None)
- request_options.pop("max_paths", None)
- request_options.pop("skip_instance_cache", None)
- self.kwargs = request_options
-
- @property
- def fsid(self):
- return "http"
-
- def encode_url(self, url):
- return yarl.URL(url, encoded=self.encoded)
-
- @staticmethod
- def close_session(loop, session):
- if loop is not None and loop.is_running():
- try:
- sync(loop, session.close, timeout=0.1)
- return
- except (TimeoutError, FSTimeoutError):
- pass
- connector = getattr(session, "_connector", None)
- if connector is not None:
- # close after loop is dead
- connector._close()
-
- async def set_session(self):
- if self._session is None:
- self._session = await self.get_client(loop=self.loop, **self.client_kwargs)
- if not self.asynchronous:
- weakref.finalize(self, self.close_session, self.loop, self._session)
- return self._session
-
- @classmethod
- def _strip_protocol(cls, path):
- """For HTTP, we always want to keep the full URL"""
- return path
-
- @classmethod
- def _parent(cls, path):
- # override, since _strip_protocol is different for URLs
- par = super()._parent(path)
- if len(par) > 7: # "http://..."
- return par
- return ""
-
- async def _ls_real(self, url, detail=True, **kwargs):
- # ignoring URL-encoded arguments
- kw = self.kwargs.copy()
- kw.update(kwargs)
- logger.debug(url)
- session = await self.set_session()
- async with session.get(self.encode_url(url), **self.kwargs) as r:
- self._raise_not_found_for_status(r, url)
- text = await r.text()
- if self.simple_links:
- links = ex2.findall(text) + [u[2] for u in ex.findall(text)]
- else:
- links = [u[2] for u in ex.findall(text)]
- out = set()
- parts = urlparse(url)
- for l in links:
- if isinstance(l, tuple):
- l = l[1]
- if l.startswith("/") and len(l) > 1:
- # absolute URL on this server
- l = parts.scheme + "://" + parts.netloc + l
- if l.startswith("http"):
- if self.same_schema and l.startswith(url.rstrip("/") + "/"):
- out.add(l)
- elif l.replace("https", "http").startswith(
- url.replace("https", "http").rstrip("/") + "/"
- ):
- # allowed to cross http <-> https
- out.add(l)
- else:
- if l not in ["..", "../"]:
- # Ignore FTP-like "parent"
- out.add("/".join([url.rstrip("/"), l.lstrip("/")]))
- if not out and url.endswith("/"):
- out = await self._ls_real(url.rstrip("/"), detail=False)
- if detail:
- return [
- {
- "name": u,
- "size": None,
- "type": "directory" if u.endswith("/") else "file",
- }
- for u in out
- ]
- else:
- return list(sorted(out))
-
- async def _ls(self, url, detail=True, **kwargs):
-
- if self.use_listings_cache and url in self.dircache:
- out = self.dircache[url]
- else:
- out = await self._ls_real(url, detail=detail, **kwargs)
- self.dircache[url] = out
- return out
-
- ls = sync_wrapper(_ls)
-
- def _raise_not_found_for_status(self, response, url):
- """
- Raises FileNotFoundError for 404s, otherwise uses raise_for_status.
- """
- if response.status == 404:
- raise FileNotFoundError(url)
- response.raise_for_status()
-
- async def _cat_file(self, url, start=None, end=None, **kwargs):
- kw = self.kwargs.copy()
- kw.update(kwargs)
- logger.debug(url)
-
- if start is not None or end is not None:
- if start == end:
- return b""
- headers = kw.pop("headers", {}).copy()
-
- headers["Range"] = await self._process_limits(url, start, end)
- kw["headers"] = headers
- session = await self.set_session()
- async with session.get(self.encode_url(url), **kw) as r:
- out = await r.read()
- self._raise_not_found_for_status(r, url)
- return out
-
- async def _get_file(
- self, rpath, lpath, chunk_size=5 * 2**20, callback=_DEFAULT_CALLBACK, **kwargs
- ):
- kw = self.kwargs.copy()
- kw.update(kwargs)
- logger.debug(rpath)
- session = await self.set_session()
- async with session.get(self.encode_url(rpath), **kw) as r:
- try:
- size = int(r.headers["content-length"])
- except (ValueError, KeyError):
- size = None
-
- callback.set_size(size)
- self._raise_not_found_for_status(r, rpath)
- if isfilelike(lpath):
- outfile = lpath
- else:
- outfile = open(lpath, "wb")
-
- try:
- chunk = True
- while chunk:
- chunk = await r.content.read(chunk_size)
- outfile.write(chunk)
- callback.relative_update(len(chunk))
- finally:
- if not isfilelike(lpath):
- outfile.close()
-
- async def _put_file(
- self,
- lpath,
- rpath,
- chunk_size=5 * 2**20,
- callback=_DEFAULT_CALLBACK,
- method="post",
- **kwargs,
- ):
- async def gen_chunks():
- # Support passing arbitrary file-like objects
- # and use them instead of streams.
- if isinstance(lpath, io.IOBase):
- context = nullcontext(lpath)
- use_seek = False # might not support seeking
- else:
- context = open(lpath, "rb")
- use_seek = True
-
- with context as f:
- if use_seek:
- callback.set_size(f.seek(0, 2))
- f.seek(0)
- else:
- callback.set_size(getattr(f, "size", None))
-
- chunk = f.read(chunk_size)
- while chunk:
- yield chunk
- callback.relative_update(len(chunk))
- chunk = f.read(chunk_size)
-
- kw = self.kwargs.copy()
- kw.update(kwargs)
- session = await self.set_session()
-
- method = method.lower()
- if method not in ("post", "put"):
- raise ValueError(
- f"method has to be either 'post' or 'put', not: {method!r}"
- )
-
- meth = getattr(session, method)
- async with meth(rpath, data=gen_chunks(), **kw) as resp:
- self._raise_not_found_for_status(resp, rpath)
-
- async def _exists(self, path, **kwargs):
- kw = self.kwargs.copy()
- kw.update(kwargs)
- try:
- logger.debug(path)
- session = await self.set_session()
- r = await session.get(self.encode_url(path), **kw)
- async with r:
- return r.status < 400
- except (requests.HTTPError, aiohttp.ClientError):
- return False
-
- async def _isfile(self, path, **kwargs):
- return await self._exists(path, **kwargs)
-
- def _open(
- self,
- path,
- mode="rb",
- block_size=None,
- autocommit=None, # XXX: This differs from the base class.
- cache_type=None,
- cache_options=None,
- size=None,
- **kwargs,
- ):
- """Make a file-like object
-
- Parameters
- ----------
- path: str
- Full URL with protocol
- mode: string
- must be "rb"
- block_size: int or None
- Bytes to download in one request; use instance value if None. If
- zero, will return a streaming Requests file-like instance.
- kwargs: key-value
- Any other parameters, passed to requests calls
- """
- if mode != "rb":
- raise NotImplementedError
- block_size = block_size if block_size is not None else self.block_size
- kw = self.kwargs.copy()
- kw["asynchronous"] = self.asynchronous
- kw.update(kwargs)
- size = size or self.info(path, **kwargs)["size"]
- session = sync(self.loop, self.set_session)
- if block_size and size:
- return HTTPFile(
- self,
- path,
- session=session,
- block_size=block_size,
- mode=mode,
- size=size,
- cache_type=cache_type or self.cache_type,
- cache_options=cache_options or self.cache_options,
- loop=self.loop,
- **kw,
- )
- else:
- return HTTPStreamFile(
- self,
- path,
- mode=mode,
- loop=self.loop,
- session=session,
- **kw,
- )
-
- async def open_async(self, path, mode="rb", size=None, **kwargs):
- session = await self.set_session()
- if size is None:
- try:
- size = (await self._info(path, **kwargs))["size"]
- except FileNotFoundError:
- pass
- return AsyncStreamFile(
- self,
- path,
- loop=self.loop,
- session=session,
- size=size,
- **kwargs,
- )
-
- def ukey(self, url):
- """Unique identifier; assume HTTP files are static, unchanging"""
- return tokenize(url, self.kwargs, self.protocol)
-
- async def _info(self, url, **kwargs):
- """Get info of URL
-
- Tries to access location via HEAD, and then GET methods, but does
- not fetch the data.
-
- It is possible that the server does not supply any size information, in
- which case size will be given as None (and certain operations on the
- corresponding file will not work).
- """
- info = {}
- session = await self.set_session()
-
- for policy in ["head", "get"]:
- try:
- info.update(
- await _file_info(
- self.encode_url(url),
- size_policy=policy,
- session=session,
- **self.kwargs,
- **kwargs,
- )
- )
- if info.get("size") is not None:
- break
- except Exception as exc:
- if policy == "get":
- # If get failed, then raise a FileNotFoundError
- raise FileNotFoundError(url) from exc
- logger.debug(str(exc))
-
- return {"name": url, "size": None, **info, "type": "file"}
-
- async def _glob(self, path, **kwargs):
- """
- Find files by glob-matching.
-
-        This implementation is identical to the one in AbstractFileSystem,
- but "?" is not considered as a character for globbing, because it is
- so common in URLs, often identifying the "query" part.
- """
- import re
-
- ends = path.endswith("/")
- path = self._strip_protocol(path)
- indstar = path.find("*") if path.find("*") >= 0 else len(path)
- indbrace = path.find("[") if path.find("[") >= 0 else len(path)
-
- ind = min(indstar, indbrace)
-
- detail = kwargs.pop("detail", False)
-
- if not has_magic(path):
- root = path
- depth = 1
- if ends:
- path += "/*"
- elif await self._exists(path):
- if not detail:
- return [path]
- else:
- return {path: await self._info(path)}
- else:
- if not detail:
- return [] # glob of non-existent returns empty
- else:
- return {}
- elif "/" in path[:ind]:
- ind2 = path[:ind].rindex("/")
- root = path[: ind2 + 1]
- depth = None if "**" in path else path[ind2 + 1 :].count("/") + 1
- else:
- root = ""
- depth = None if "**" in path else path[ind + 1 :].count("/") + 1
-
- allpaths = await self._find(
- root, maxdepth=depth, withdirs=True, detail=True, **kwargs
- )
- # Escape characters special to python regex, leaving our supported
- # special characters in place.
- # See https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html
- # for shell globbing details.
- pattern = (
- "^"
- + (
- path.replace("\\", r"\\")
- .replace(".", r"\.")
- .replace("+", r"\+")
- .replace("//", "/")
- .replace("(", r"\(")
- .replace(")", r"\)")
- .replace("|", r"\|")
- .replace("^", r"\^")
- .replace("$", r"\$")
- .replace("{", r"\{")
- .replace("}", r"\}")
- .rstrip("/")
- )
- + "$"
- )
- pattern = re.sub("[*]{2}", "=PLACEHOLDER=", pattern)
- pattern = re.sub("[*]", "[^/]*", pattern)
- pattern = re.compile(pattern.replace("=PLACEHOLDER=", ".*"))
- out = {
- p: allpaths[p]
- for p in sorted(allpaths)
- if pattern.match(p.replace("//", "/").rstrip("/"))
- }
- if detail:
- return out
- else:
- return list(out)
-
- async def _isdir(self, path):
- # override, since all URLs are (also) files
- try:
- return bool(await self._ls(path))
- except (FileNotFoundError, ValueError):
- return False
-
-
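For orientation, the class above is normally reached through fsspec's front door rather than instantiated directly. A sketch with a placeholder URL, assuming the server answers plain GET/HEAD requests:

    import fsspec

    fs = fsspec.filesystem("http")             # returns an HTTPFileSystem instance
    url = "https://example.com/data.csv"
    if fs.exists(url):
        info = fs.info(url)                    # HEAD first, falling back to GET
        with fs.open(url, "rb", block_size=2**20) as f:  # HTTPFile with 1 MiB read-ahead
            header = f.read(1024)              # served as a byte-range request where supported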
-class HTTPFile(AbstractBufferedFile):
- """
-    A file-like object pointing to a remote HTTP(S) resource
-
-    Supports only reading, with read-ahead of a predetermined block-size.
-
- In the case that the server does not supply the filesize, only reading of
- the complete file in one go is supported.
-
- Parameters
- ----------
- url: str
- Full URL of the remote resource, including the protocol
- session: requests.Session or None
- All calls will be made within this session, to avoid restarting
- connections where the server allows this
- block_size: int or None
- The amount of read-ahead to do, in bytes. Default is 5MB, or the value
- configured for the FileSystem creating this file
- size: None or int
- If given, this is the size of the file in bytes, and we don't attempt
- to call the server to find the value.
- kwargs: all other key-values are passed to requests calls.
- """
-
- def __init__(
- self,
- fs,
- url,
- session=None,
- block_size=None,
- mode="rb",
- cache_type="bytes",
- cache_options=None,
- size=None,
- loop=None,
- asynchronous=False,
- **kwargs,
- ):
- if mode != "rb":
- raise NotImplementedError("File mode not supported")
- self.asynchronous = asynchronous
- self.url = url
- self.session = session
- self.details = {"name": url, "size": size, "type": "file"}
- super().__init__(
- fs=fs,
- path=url,
- mode=mode,
- block_size=block_size,
- cache_type=cache_type,
- cache_options=cache_options,
- **kwargs,
- )
- self.loop = loop
-
- def read(self, length=-1):
- """Read bytes from file
-
- Parameters
- ----------
- length: int
- Read up to this many bytes. If negative, read all content to end of
- file. If the server has not supplied the filesize, attempting to
- read only part of the data will raise a ValueError.
- """
- if (
- (length < 0 and self.loc == 0) # explicit read all
- # but not when the size is known and fits into a block anyways
- and not (self.size is not None and self.size <= self.blocksize)
- ):
- self._fetch_all()
- if self.size is None:
- if length < 0:
- self._fetch_all()
- else:
- length = min(self.size - self.loc, length)
- return super().read(length)
-
- async def async_fetch_all(self):
- """Read whole file in one shot, without caching
-
- This is only called when position is still at zero,
- and read() is called without a byte-count.
- """
- logger.debug(f"Fetch all for {self}")
- if not isinstance(self.cache, AllBytes):
- r = await self.session.get(self.fs.encode_url(self.url), **self.kwargs)
- async with r:
- r.raise_for_status()
- out = await r.read()
- self.cache = AllBytes(
- size=len(out), fetcher=None, blocksize=None, data=out
- )
- self.size = len(out)
-
- _fetch_all = sync_wrapper(async_fetch_all)
-
- def _parse_content_range(self, headers):
- """Parse the Content-Range header"""
- s = headers.get("Content-Range", "")
- m = re.match(r"bytes (\d+-\d+|\*)/(\d+|\*)", s)
- if not m:
- return None, None, None
-
- if m[1] == "*":
- start = end = None
- else:
- start, end = [int(x) for x in m[1].split("-")]
- total = None if m[2] == "*" else int(m[2])
- return start, end, total
-
- async def async_fetch_range(self, start, end):
- """Download a block of data
-
- The expectation is that the server returns only the requested bytes,
- with HTTP code 206. If this is not the case, we first check the headers,
- and then stream the output - if the data size is bigger than we
- requested, an exception is raised.
- """
- logger.debug(f"Fetch range for {self}: {start}-{end}")
- kwargs = self.kwargs.copy()
- headers = kwargs.pop("headers", {}).copy()
- headers["Range"] = "bytes=%i-%i" % (start, end - 1)
- logger.debug(str(self.url) + " : " + headers["Range"])
- r = await self.session.get(
- self.fs.encode_url(self.url), headers=headers, **kwargs
- )
- async with r:
- if r.status == 416:
- # range request outside file
- return b""
- r.raise_for_status()
-
- # If the server has handled the range request, it should reply
- # with status 206 (partial content). But we'll guess that a suitable
- # Content-Range header or a Content-Length no more than the
- # requested range also mean we have got the desired range.
- response_is_range = (
- r.status == 206
- or self._parse_content_range(r.headers)[0] == start
- or int(r.headers.get("Content-Length", end + 1)) <= end - start
- )
-
- if response_is_range:
- # partial content, as expected
- out = await r.read()
- elif start > 0:
- raise ValueError(
- "The HTTP server doesn't appear to support range requests. "
- "Only reading this file from the beginning is supported. "
- "Open with block_size=0 for a streaming file interface."
- )
- else:
- # Response is not a range, but we want the start of the file,
- # so we can read the required amount anyway.
- cl = 0
- out = []
- while True:
- chunk = await r.content.read(2**20)
- # data size unknown, let's read until we have enough
- if chunk:
- out.append(chunk)
- cl += len(chunk)
- if cl > end - start:
- break
- else:
- break
- out = b"".join(out)[: end - start]
- return out
-
- _fetch_range = sync_wrapper(async_fetch_range)
-
- def __reduce__(self):
- return (
- reopen,
- (
- self.fs,
- self.url,
- self.mode,
- self.blocksize,
- self.cache.name if self.cache else "none",
- self.size,
- ),
- )
-
-
-def reopen(fs, url, mode, blocksize, cache_type, size=None):
- return fs.open(
- url, mode=mode, block_size=blocksize, cache_type=cache_type, size=size
- )
-
-
-magic_check = re.compile("([*[])")
-
-
-def has_magic(s):
- match = magic_check.search(s)
- return match is not None
-
-
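A small worked example of the two header formats used above: async_fetch_range sends an inclusive Range header, and _parse_content_range reads the Content-Range reply. The values are illustrative:

    import re

    start, end = 0, 1024                              # fetch the first 1024 bytes
    request_range = "bytes=%i-%i" % (start, end - 1)  # -> "bytes=0-1023" (inclusive)

    content_range = "bytes 0-1023/4096"               # example response header
    m = re.match(r"bytes (\d+-\d+|\*)/(\d+|\*)", content_range)
    first, last = (int(x) for x in m[1].split("-"))
    total = None if m[2] == "*" else int(m[2])
    assert (first, last, total) == (0, 1023, 4096)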
-class HTTPStreamFile(AbstractBufferedFile):
- def __init__(self, fs, url, mode="rb", loop=None, session=None, **kwargs):
- self.asynchronous = kwargs.pop("asynchronous", False)
- self.url = url
- self.loop = loop
- self.session = session
- if mode != "rb":
- raise ValueError
- self.details = {"name": url, "size": None}
- super().__init__(fs=fs, path=url, mode=mode, cache_type="none", **kwargs)
-
- async def cor():
- r = await self.session.get(self.fs.encode_url(url), **kwargs).__aenter__()
- self.fs._raise_not_found_for_status(r, url)
- return r
-
- self.r = sync(self.loop, cor)
-
- def seek(self, loc, whence=0):
- if loc == 0 and whence == 1:
- return
- if loc == self.loc and whence == 0:
- return
- raise ValueError("Cannot seek streaming HTTP file")
-
- async def _read(self, num=-1):
- out = await self.r.content.read(num)
- self.loc += len(out)
- return out
-
- read = sync_wrapper(_read)
-
- async def _close(self):
- self.r.close()
-
- def close(self):
- asyncio.run_coroutine_threadsafe(self._close(), self.loop)
- super().close()
-
- def __reduce__(self):
- return reopen, (self.fs, self.url, self.mode, self.blocksize, self.cache.name)
-
-
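As the _open docstring above notes, opening with block_size=0 yields this streaming class instead of HTTPFile, so the file can only be read forward. A sketch with a placeholder URL:

    import fsspec

    fs = fsspec.filesystem("http")
    with fs.open("https://example.com/large.bin", "rb", block_size=0) as f:  # HTTPStreamFile
        first_chunk = f.read(2**20)   # sequential reads only; seek() allows no-ops at most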
-class AsyncStreamFile(AbstractAsyncStreamedFile):
- def __init__(
- self, fs, url, mode="rb", loop=None, session=None, size=None, **kwargs
- ):
- self.url = url
- self.session = session
- self.r = None
- if mode != "rb":
- raise ValueError
- self.details = {"name": url, "size": None}
- self.kwargs = kwargs
- super().__init__(fs=fs, path=url, mode=mode, cache_type="none")
- self.size = size
-
- async def read(self, num=-1):
- if self.r is None:
- r = await self.session.get(
- self.fs.encode_url(self.url), **self.kwargs
- ).__aenter__()
- self.fs._raise_not_found_for_status(r, self.url)
- self.r = r
- out = await self.r.content.read(num)
- self.loc += len(out)
- return out
-
- async def close(self):
- if self.r is not None:
- self.r.close()
- self.r = None
- await super().close()
-
-
-async def get_range(session, url, start, end, file=None, **kwargs):
- # explicit get a range when we know it must be safe
- kwargs = kwargs.copy()
- headers = kwargs.pop("headers", {}).copy()
- headers["Range"] = "bytes=%i-%i" % (start, end - 1)
- r = await session.get(url, headers=headers, **kwargs)
- r.raise_for_status()
- async with r:
- out = await r.read()
- if file:
- with open(file, "rb+") as f:
- f.seek(start)
- f.write(out)
- else:
- return out
-
-
-async def _file_info(url, session, size_policy="head", **kwargs):
- """Call HEAD on the server to get details about the file (size/checksum etc.)
-
- Default operation is to explicitly allow redirects and use encoding
- 'identity' (no compression) to get the true size of the target.
- """
- logger.debug("Retrieve file size for %s" % url)
- kwargs = kwargs.copy()
- ar = kwargs.pop("allow_redirects", True)
- head = kwargs.get("headers", {}).copy()
- head["Accept-Encoding"] = "identity"
- kwargs["headers"] = head
-
- info = {}
- if size_policy == "head":
- r = await session.head(url, allow_redirects=ar, **kwargs)
- elif size_policy == "get":
- r = await session.get(url, allow_redirects=ar, **kwargs)
- else:
- raise TypeError('size_policy must be "head" or "get", got %s' "" % size_policy)
- async with r:
- r.raise_for_status()
-
- # TODO:
- # recognise lack of 'Accept-Ranges',
- # or 'Accept-Ranges': 'none' (not 'bytes')
- # to mean streaming only, no random access => return None
- if "Content-Length" in r.headers:
- info["size"] = int(r.headers["Content-Length"])
- elif "Content-Range" in r.headers:
- info["size"] = int(r.headers["Content-Range"].split("/")[1])
-
- for checksum_field in ["ETag", "Content-MD5", "Digest"]:
- if r.headers.get(checksum_field):
- info[checksum_field] = r.headers[checksum_field]
-
- return info
-
-
-async def _file_size(url, session=None, *args, **kwargs):
- if session is None:
- session = await get_client()
- info = await _file_info(url, session=session, *args, **kwargs)
- return info.get("size")
-
-
-file_size = sync_wrapper(_file_size)
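The module-level helper defined just above can also be called directly to ask a server for a file's size without downloading it. A sketch with a placeholder URL; the result is None when the server supplies neither Content-Length nor Content-Range:

    from fsspec.implementations.http import file_size

    n_bytes = file_size("https://example.com/data.bin")
    print(n_bytes)   # e.g. 1048576, or None if the size is unknown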
diff --git "a/spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/01_\360\237\216\245_Input_YouTube_Link.py" "b/spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/01_\360\237\216\245_Input_YouTube_Link.py"
deleted file mode 100644
index cb4dae2734aa9f8f01570fc73cfe3221dce7d1e1..0000000000000000000000000000000000000000
--- "a/spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/01_\360\237\216\245_Input_YouTube_Link.py"
+++ /dev/null
@@ -1,258 +0,0 @@
-import whisper
-from pytube import YouTube
-import requests
-import time
-import streamlit as st
-from streamlit_lottie import st_lottie
-import numpy as np
-import os
-from typing import Iterator
-from io import StringIO
-from utils import write_vtt, write_srt
-import ffmpeg
-from languages import LANGUAGES
-
-st.set_page_config(page_title="Auto Subtitled Video Generator", page_icon=":movie_camera:", layout="wide")
-
-# Define a function that we can use to load lottie files from a link.
-@st.cache()
-def load_lottieurl(url: str):
- r = requests.get(url)
- if r.status_code != 200:
- return None
- return r.json()
-
-col1, col2 = st.columns([1, 3])
-with col1:
- lottie = load_lottieurl("https://assets8.lottiefiles.com/packages/lf20_jh9gfdye.json")
- st_lottie(lottie)
-
-with col2:
- st.write("""
- ## Auto Subtitled Video Generator
- ##### Input a YouTube video link and get a video with subtitles.
- ###### ➠ If you want to transcribe the video in its original language, select the task as "Transcribe"
- ###### ➠ If you want to translate the subtitles to English, select the task as "Translate"
-    ###### I recommend starting with the base model and then experimenting with the larger models; the small and medium models often work well. """)
-
-
-@st.cache(allow_output_mutation=True)
-def populate_metadata(link):
- yt = YouTube(link)
- author = yt.author
- title = yt.title
- description = yt.description
- thumbnail = yt.thumbnail_url
- length = yt.length
- views = yt.views
- return author, title, description, thumbnail, length, views
-
-
-@st.cache(allow_output_mutation=True)
-def download_video(link):
- yt = YouTube(link)
- video = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download()
- return video
-
-
-def convert(seconds):
- return time.strftime("%H:%M:%S", time.gmtime(seconds))
-
-
-loaded_model = whisper.load_model("base")
-current_size = "None"
-
-
-@st.cache(allow_output_mutation=True)
-def change_model(current_size, size):
- if current_size != size:
- loaded_model = whisper.load_model(size)
- return loaded_model
- else:
- raise Exception("Model size is the same as the current size.")
-
-
-@st.cache(allow_output_mutation=True)
-def inference(link, loaded_model, task):
- yt = YouTube(link)
- path = yt.streams.filter(only_audio=True)[0].download(filename="audio.mp3")
- if task == "Transcribe":
- options = dict(task="transcribe", best_of=5)
- results = loaded_model.transcribe(path, **options)
- vtt = getSubs(results["segments"], "vtt", 80)
- srt = getSubs(results["segments"], "srt", 80)
- lang = results["language"]
- return results["text"], vtt, srt, lang
- elif task == "Translate":
- options = dict(task="translate", best_of=5)
- results = loaded_model.transcribe(path, **options)
- vtt = getSubs(results["segments"], "vtt", 80)
- srt = getSubs(results["segments"], "srt", 80)
- lang = results["language"]
- return results["text"], vtt, srt, lang
- else:
- raise ValueError("Task not supported")
-
-
-@st.cache(allow_output_mutation=True)
-def getSubs(segments: Iterator[dict], format: str, maxLineWidth: int) -> str:
- segmentStream = StringIO()
-
- if format == 'vtt':
- write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
- elif format == 'srt':
- write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
- else:
- raise Exception("Unknown format " + format)
-
- segmentStream.seek(0)
- return segmentStream.read()
-
-
-def get_language_code(language):
- if language in LANGUAGES.keys():
- detected_language = LANGUAGES[language]
- return detected_language
- else:
- raise ValueError("Language not supported")
-
-
-def generate_subtitled_video(video, audio, transcript):
- video_file = ffmpeg.input(video)
- audio_file = ffmpeg.input(audio)
- ffmpeg.concat(video_file.filter("subtitles", transcript), audio_file, v=1, a=1).output("final.mp4").run(quiet=True, overwrite_output=True)
- video_with_subs = open("final.mp4", "rb")
- return video_with_subs
-
-
-def main():
- size = st.selectbox("Select Model Size (The larger the model, the more accurate the transcription will be, but it will take longer)", ["tiny", "base", "small", "medium", "large"], index=1)
- loaded_model = change_model(current_size, size)
- st.write(f"Model is {'multilingual' if loaded_model.is_multilingual else 'English-only'} "
- f"and has {sum(np.prod(p.shape) for p in loaded_model.parameters()):,} parameters.")
- link = st.text_input("YouTube Link (The longer the video, the longer the processing time)")
- task = st.selectbox("Select Task", ["Transcribe", "Translate"], index=0)
- if task == "Transcribe":
- if st.button("Transcribe"):
- author, title, description, thumbnail, length, views = populate_metadata(link)
- results = inference(link, loaded_model, task)
- video = download_video(link)
- lang = results[3]
- detected_language = get_language_code(lang)
-
- col3, col4 = st.columns(2)
- col5, col6, col7, col8 = st.columns(4)
- col9, col10 = st.columns(2)
- with col3:
- st.video(video)
-
- # Write the results to a .txt file and download it.
- with open("transcript.txt", "w+", encoding='utf8') as f:
- f.writelines(results[0])
- f.close()
- with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
- datatxt = f.read()
-
- with open("transcript.vtt", "w+",encoding='utf8') as f:
- f.writelines(results[1])
- f.close()
- with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
- datavtt = f.read()
-
- with open("transcript.srt", "w+",encoding='utf8') as f:
- f.writelines(results[2])
- f.close()
- with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
- datasrt = f.read()
-
- with col5:
- st.download_button(label="Download Transcript (.txt)",
- data=datatxt,
- file_name="transcript.txt")
- with col6:
- st.download_button(label="Download Transcript (.vtt)",
- data=datavtt,
- file_name="transcript.vtt")
- with col7:
- st.download_button(label="Download Transcript (.srt)",
- data=datasrt,
- file_name="transcript.srt")
- with col9:
- st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
- with col10:
- st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
-
- with col4:
- with st.spinner("Generating Subtitled Video"):
- video_with_subs = generate_subtitled_video(video, "audio.mp3", "transcript.srt")
- st.video(video_with_subs)
- st.balloons()
- with col8:
- st.download_button(label="Download Subtitled Video",
- data=video_with_subs,
- file_name=f"{title} with subtitles.mp4")
- elif task == "Translate":
- if st.button("Translate to English"):
- author, title, description, thumbnail, length, views = populate_metadata(link)
- results = inference(link, loaded_model, task)
- video = download_video(link)
- lang = results[3]
- detected_language = get_language_code(lang)
-
- col3, col4 = st.columns(2)
- col5, col6, col7, col8 = st.columns(4)
- col9, col10 = st.columns(2)
- with col3:
- st.video(video)
-
- # Write the results to a .txt file and download it.
- with open("transcript.txt", "w+", encoding='utf8') as f:
- f.writelines(results[0])
- f.close()
- with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
- datatxt = f.read()
-
- with open("transcript.vtt", "w+",encoding='utf8') as f:
- f.writelines(results[1])
- f.close()
- with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
- datavtt = f.read()
-
- with open("transcript.srt", "w+",encoding='utf8') as f:
- f.writelines(results[2])
- f.close()
- with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
- datasrt = f.read()
- with col5:
- st.download_button(label="Download Transcript (.txt)",
- data=datatxt,
- file_name="transcript.txt")
- with col6:
- st.download_button(label="Download Transcript (.vtt)",
- data=datavtt,
- file_name="transcript.vtt")
- with col7:
- st.download_button(label="Download Transcript (.srt)",
- data=datasrt,
- file_name="transcript.srt")
- with col9:
- st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
- with col10:
- st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
-
- with col4:
- with st.spinner("Generating Subtitled Video"):
- video_with_subs = generate_subtitled_video(video, "audio.mp3", "transcript.srt")
- st.video(video_with_subs)
- st.balloons()
- with col8:
- st.download_button(label="Download Subtitled Video",
- data=video_with_subs,
- file_name=f"{title} with subtitles.mp4")
- else:
- st.error("Please select a task.")
-
-
-if __name__ == "__main__":
- main()
- st.markdown("###### Made with :heart: by [@BatuhanYılmaz](https://twitter.com/batuhan3326) [](https://www.buymeacoffee.com/batuhanylmz)")
\ No newline at end of file
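Stripped of the Streamlit UI, the core of the pipeline above is Whisper transcription followed by subtitle writing. A minimal sketch, assuming openai-whisper is installed, a local audio.mp3 exists, and write_srt is the same helper the app imports from its utils module:

    from io import StringIO

    import whisper
    from utils import write_srt   # repo helper, as imported by the app above

    model = whisper.load_model("base")
    result = model.transcribe("audio.mp3", task="transcribe", best_of=5)

    buf = StringIO()
    write_srt(result["segments"], file=buf, maxLineWidth=80)
    print(result["language"])
    print(buf.getvalue()[:500])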
diff --git a/spaces/Danielzero/GPT3.5/assets/custom.css b/spaces/Danielzero/GPT3.5/assets/custom.css
deleted file mode 100644
index af5e9f2118b843b3bbd7627ed45e970c20b13bef..0000000000000000000000000000000000000000
--- a/spaces/Danielzero/GPT3.5/assets/custom.css
+++ /dev/null
@@ -1,353 +0,0 @@
-:root {
- --chatbot-color-light: #F3F3F3;
- --chatbot-color-dark: #121111;
-}
-
-#app_title {
- font-weight: var(--prose-header-text-weight);
- font-size: var(--text-xxl);
- line-height: 1.3;
- text-align: left;
- margin-top: 6px;
- white-space: nowrap;
-}
-#description {
- text-align: center;
- margin:16px 0
-}
-
-/* Override gradio's footer info QAQ */
-/* footer {
- display: none !important;
-} */
-#footer {
- text-align: center;
-}
-#footer div {
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.85;
-}
-
-#float_display {
- position: absolute;
- max-height: 30px;
-}
-/* user_info */
-#user_info {
- white-space: nowrap;
- position: absolute; left: 8em; top: .2em;
- z-index: var(--layer-2);
- box-shadow: var(--block-shadow);
- border: none; border-radius: var(--block-label-radius);
- background: var(--color-accent);
- padding: var(--block-label-padding);
- font-size: var(--block-label-text-size); line-height: var(--line-sm);
- width: auto; min-height: 30px!important;
- opacity: 1;
- transition: opacity 0.3s ease-in-out;
-}
-#user_info .wrap {
- opacity: 0;
-}
-#user_info p {
- color: white;
- font-weight: var(--block-label-text-weight);
-}
-#user_info.hideK {
- opacity: 0;
- transition: opacity 1s ease-in-out;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: monospace;
- color: var(--body-text-color-subdued);
-}
-
-#status_display {
- transition: all 0.6s;
-}
-#chuanhu_chatbot {
- transition: height 0.3s ease;
-}
-
-/* usage_display */
-.insert_block {
- position: relative;
- margin: 0;
- padding: .5em 1em;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
-    background-color: var(--input-background-fill);
- margin: 0 1em;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
- background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-
-.apSwitch {
- top: 2px;
- display: inline-block;
- height: 24px;
- position: relative;
- width: 48px;
- border-radius: 12px;
-}
-.apSwitch input {
- display: none !important;
-}
-.apSlider {
- background-color: var(--block-label-background-fill);
- bottom: 0;
- cursor: pointer;
- left: 0;
- position: absolute;
- right: 0;
- top: 0;
- transition: .4s;
- font-size: 18px;
- border-radius: 12px;
-}
-.apSlider::before {
- bottom: -1.5px;
- left: 1px;
- position: absolute;
- transition: .4s;
- content: "🌞";
-}
-input:checked + .apSlider {
- background-color: var(--block-label-background-fill);
-}
-input:checked + .apSlider::before {
- transform: translateX(23px);
- content:"🌚";
-}
-
-#submit_btn, #cancel_btn {
- height: 42px !important;
-}
-#submit_btn::before {
- content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-#cancel_btn::before {
- content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light theme (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-color-light) !important;
- color: #000000 !important;
-}
-[data-testid = "bot"] {
- background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
- background-color: #95EC69 !important;
-}
-/* Dark theme */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-color-dark) !important;
- color: #FFFFFF !important;
-}
-.dark [data-testid = "bot"] {
- background-color: #2C2C2C !important;
-}
-.dark [data-testid = "user"] {
- background-color: #26B561 !important;
-}
-
-/* Devices with a screen width of 500px or more */
-/* update on 2023.4.8: fine-grained height adjustments are now handled in JavaScript */
-@media screen and (min-width: 500px) {
- #chuanhu_chatbot {
- height: calc(100vh - 200px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
-}
-/* Devices with a screen width below 500px */
-@media screen and (max-width: 499px) {
- #chuanhu_chatbot {
- height: calc(100vh - 140px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
- [data-testid = "bot"] {
- max-width: 98% !important;
- }
- #app_title h1{
- letter-spacing: -1px; font-size: 22px;
- }
-}
-/* Chat bubbles */
-[class *= "message"] {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-code {
- display: inline;
- white-space: break-spaces;
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-pre code {
- display: block;
- overflow: auto;
- white-space: pre;
- background-color: hsla(0, 0%, 0%, 80%)!important;
- border-radius: 10px;
- padding: 1.4em 1.2em 0em 1.4em;
- margin: 1.2em 2em 1.2em 0.5em;
- color: #FFF;
- box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2);
-}
-/* Code highlighting styles */
-.highlight .hll { background-color: #49483e }
-.highlight .c { color: #75715e } /* Comment */
-.highlight .err { color: #960050; background-color: #1e0010 } /* Error */
-.highlight .k { color: #66d9ef } /* Keyword */
-.highlight .l { color: #ae81ff } /* Literal */
-.highlight .n { color: #f8f8f2 } /* Name */
-.highlight .o { color: #f92672 } /* Operator */
-.highlight .p { color: #f8f8f2 } /* Punctuation */
-.highlight .ch { color: #75715e } /* Comment.Hashbang */
-.highlight .cm { color: #75715e } /* Comment.Multiline */
-.highlight .cp { color: #75715e } /* Comment.Preproc */
-.highlight .cpf { color: #75715e } /* Comment.PreprocFile */
-.highlight .c1 { color: #75715e } /* Comment.Single */
-.highlight .cs { color: #75715e } /* Comment.Special */
-.highlight .gd { color: #f92672 } /* Generic.Deleted */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gi { color: #a6e22e } /* Generic.Inserted */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #75715e } /* Generic.Subheading */
-.highlight .kc { color: #66d9ef } /* Keyword.Constant */
-.highlight .kd { color: #66d9ef } /* Keyword.Declaration */
-.highlight .kn { color: #f92672 } /* Keyword.Namespace */
-.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */
-.highlight .kr { color: #66d9ef } /* Keyword.Reserved */
-.highlight .kt { color: #66d9ef } /* Keyword.Type */
-.highlight .ld { color: #e6db74 } /* Literal.Date */
-.highlight .m { color: #ae81ff } /* Literal.Number */
-.highlight .s { color: #e6db74 } /* Literal.String */
-.highlight .na { color: #a6e22e } /* Name.Attribute */
-.highlight .nb { color: #f8f8f2 } /* Name.Builtin */
-.highlight .nc { color: #a6e22e } /* Name.Class */
-.highlight .no { color: #66d9ef } /* Name.Constant */
-.highlight .nd { color: #a6e22e } /* Name.Decorator */
-.highlight .ni { color: #f8f8f2 } /* Name.Entity */
-.highlight .ne { color: #a6e22e } /* Name.Exception */
-.highlight .nf { color: #a6e22e } /* Name.Function */
-.highlight .nl { color: #f8f8f2 } /* Name.Label */
-.highlight .nn { color: #f8f8f2 } /* Name.Namespace */
-.highlight .nx { color: #a6e22e } /* Name.Other */
-.highlight .py { color: #f8f8f2 } /* Name.Property */
-.highlight .nt { color: #f92672 } /* Name.Tag */
-.highlight .nv { color: #f8f8f2 } /* Name.Variable */
-.highlight .ow { color: #f92672 } /* Operator.Word */
-.highlight .w { color: #f8f8f2 } /* Text.Whitespace */
-.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */
-.highlight .mf { color: #ae81ff } /* Literal.Number.Float */
-.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */
-.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */
-.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */
-.highlight .sa { color: #e6db74 } /* Literal.String.Affix */
-.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */
-.highlight .sc { color: #e6db74 } /* Literal.String.Char */
-.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */
-.highlight .sd { color: #e6db74 } /* Literal.String.Doc */
-.highlight .s2 { color: #e6db74 } /* Literal.String.Double */
-.highlight .se { color: #ae81ff } /* Literal.String.Escape */
-.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */
-.highlight .si { color: #e6db74 } /* Literal.String.Interpol */
-.highlight .sx { color: #e6db74 } /* Literal.String.Other */
-.highlight .sr { color: #e6db74 } /* Literal.String.Regex */
-.highlight .s1 { color: #e6db74 } /* Literal.String.Single */
-.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */
-.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
-.highlight .fm { color: #a6e22e } /* Name.Function.Magic */
-.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */
-.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */
-.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */
-.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */
-.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */
diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/toast.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/toast.tsx
deleted file mode 100644
index 94b1e9a1d3a82fe1beea6e931c4887e2260371cd..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/components/ui/toast.tsx
+++ /dev/null
@@ -1,127 +0,0 @@
-import * as React from "react"
-import * as ToastPrimitives from "@radix-ui/react-toast"
-import { cva, type VariantProps } from "class-variance-authority"
-import { X } from "lucide-react"
-
-import { cn } from "@/lib/utils"
-
-const ToastProvider = ToastPrimitives.Provider
-
-const ToastViewport = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-ToastViewport.displayName = ToastPrimitives.Viewport.displayName
-
-const toastVariants = cva(
- "group pointer-events-auto relative flex w-full items-center justify-between space-x-4 overflow-hidden rounded-md border border-stone-200 p-6 pr-8 shadow-lg transition-all data-[swipe=cancel]:translate-x-0 data-[swipe=end]:translate-x-[var(--radix-toast-swipe-end-x)] data-[swipe=move]:translate-x-[var(--radix-toast-swipe-move-x)] data-[swipe=move]:transition-none data-[state=open]:animate-in data-[state=closed]:animate-out data-[swipe=end]:animate-out data-[state=closed]:fade-out-80 data-[state=closed]:slide-out-to-right-full data-[state=open]:slide-in-from-top-full data-[state=open]:sm:slide-in-from-bottom-full dark:border-stone-800",
- {
- variants: {
- variant: {
- default: "border bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50",
- destructive:
- "destructive group border-red-500 bg-red-500 text-stone-50 dark:border-red-900 dark:bg-red-900 dark:text-stone-50",
- },
- },
- defaultVariants: {
- variant: "default",
- },
- }
-)
-
-const Toast = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef &
- VariantProps
->(({ className, variant, ...props }, ref) => {
- return (
-
- )
-})
-Toast.displayName = ToastPrimitives.Root.displayName
-
-const ToastAction = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-ToastAction.displayName = ToastPrimitives.Action.displayName
-
-const ToastClose = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-
-
-))
-ToastClose.displayName = ToastPrimitives.Close.displayName
-
-const ToastTitle = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-ToastTitle.displayName = ToastPrimitives.Title.displayName
-
-const ToastDescription = React.forwardRef<
- React.ElementRef,
- React.ComponentPropsWithoutRef
->(({ className, ...props }, ref) => (
-
-))
-ToastDescription.displayName = ToastPrimitives.Description.displayName
-
-type ToastProps = React.ComponentPropsWithoutRef<typeof Toast>
-
-type ToastActionElement = React.ReactElement<typeof ToastAction>
-
-export {
- type ToastProps,
- type ToastActionElement,
- ToastProvider,
- ToastViewport,
- Toast,
- ToastTitle,
- ToastDescription,
- ToastClose,
- ToastAction,
-}
diff --git a/spaces/DragGan/DragGan/scripts/gui.sh b/spaces/DragGan/DragGan/scripts/gui.sh
deleted file mode 100644
index 5eb68e3b7d2e51b8781fa2e638c7005f0c994246..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/scripts/gui.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-python visualizer_drag.py \
- checkpoints/stylegan2_lions_512_pytorch.pkl \
- checkpoints/stylegan2-ffhq-512x512.pkl \
- checkpoints/stylegan2-afhqcat-512x512.pkl \
- checkpoints/stylegan2-car-config-f.pkl \
- checkpoints/stylegan2_dogs_1024_pytorch.pkl \
- checkpoints/stylegan2_horses_256_pytorch.pkl \
- checkpoints/stylegan2-cat-config-f.pkl \
- checkpoints/stylegan2_elephants_512_pytorch.pkl \
- checkpoints/stylegan_human_v2_512.pkl \
- checkpoints/stylegan2-lhq-256x256.pkl
diff --git a/spaces/EduardoPacheco/DINOv2-Features-Visualization/README.md b/spaces/EduardoPacheco/DINOv2-Features-Visualization/README.md
deleted file mode 100644
index 0a13b97468edabaeeb145c18882a49186a1d1e5c..0000000000000000000000000000000000000000
--- a/spaces/EduardoPacheco/DINOv2-Features-Visualization/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: DINOv2 Features Visualization
-emoji: 🚀
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/conv.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
-        # We already checked that `norm` is in CONV_NORMALIZATIONS, so any other
-        # choice doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
- module is causal, or return an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
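The two helpers above split normalization in two: a parametrization applied to the convolution's weights (weight or spectral norm) and a separate module applied to its output (GroupNorm over time, or identity). A quick sketch, assuming torch is available:

    import torch.nn as nn

    from audiocraft.modules.conv import apply_parametrization_norm, get_norm_module

    conv = nn.Conv1d(4, 8, kernel_size=3)
    wn_conv = apply_parametrization_norm(conv, norm='weight_norm')       # reparametrizes the weights
    post = get_norm_module(wn_conv, causal=False, norm='weight_norm')    # -> nn.Identity()

    gn_conv = nn.Conv1d(4, 8, kernel_size=3)
    post_gn = get_norm_module(gn_conv, causal=False, norm='time_group_norm')  # -> nn.GroupNorm(1, 8)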
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`.
- """
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
- 1 2 3 4 # once you removed padding, we are missing one time step !
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
-
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
-    If this is the case, we insert extra 0 padding to the right before the reflection happens.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
-    """Remove padding from x, properly handling zero padding. Only for 1d!
- """
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
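A quick numeric check of the helpers above, assuming torch is installed: pad1d falls back to extra zero padding when the signal is shorter than the reflect pad, and unpad1d undoes a symmetric pad exactly:

    import torch

    from audiocraft.modules.conv import pad1d, unpad1d

    x = torch.arange(3.0).view(1, 1, 3)        # a length-3 signal
    y = pad1d(x, (4, 4), mode='reflect')       # input shorter than the pad, so zeros are appended first
    assert y.shape[-1] == 3 + 4 + 4
    assert torch.equal(unpad1d(y, (4, 4)), x)  # stripping the same padding recovers the input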
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1'
- f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).')
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding, needed when the total padding is odd
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
-
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding, needed when the total padding is odd
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
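A rough round-trip sketch of the two streamable wrappers above, assuming the whole module (including the `apply_parametrization_norm` / `get_norm_module` helpers defined earlier in this file) is importable; with the default `norm='none'` they should reduce to plain convolutions.

```python
import torch

enc = StreamableConv1d(1, 8, kernel_size=4, stride=2, causal=True, pad_mode='constant')
dec = StreamableConvTranspose1d(8, 1, kernel_size=4, stride=2, causal=True)

x = torch.randn(2, 1, 100)   # (batch, channels, time)
h = enc(x)                   # causal padding keeps time // stride frames
y = dec(h)                   # transposed conv + right trimming restores the length
print(h.shape, y.shape)      # (2, 8, 50) and (2, 1, 100)
```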
diff --git a/spaces/EleutherAI/magma/example_inference.py b/spaces/EleutherAI/magma/example_inference.py
deleted file mode 100644
index b48ec9c7d64f69c93a225fd530fbb3730e15c86e..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/magma/example_inference.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from magma import Magma
-from magma.image_input import ImageInput
-
-model = Magma.from_checkpoint(
- config_path = "configs/MAGMA_v1.yml",
- checkpoint_path = "./mp_rank_00_model_states.pt",
- device = 'cuda:0'
-)
-
-inputs =[
- ## supports urls and path/to/image
- ImageInput('https://www.art-prints-on-demand.com/kunst/thomas_cole/woods_hi.jpg'),
- 'Describe the painting:'
-]
-
-## returns a tensor of shape: (1, 149, 4096)
-embeddings = model.preprocess_inputs(inputs)
-
-## returns a list of length embeddings.shape[0] (batch size)
-output = model.generate(
- embeddings = embeddings,
- max_steps = 6,
- temperature = 0.7,
- top_k = 0,
-)
-
-print(output[0]) ## A cabin on a lake
diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/train.py b/spaces/EronSamez/RVC_HFmeu/demucs/train.py
deleted file mode 100644
index 6bd221279dc986a6df1a8d7b4d4444bb822a1cb3..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/demucs/train.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-import tqdm
-from torch.utils.data import DataLoader
-from torch.utils.data.distributed import DistributedSampler
-
-from .utils import apply_model, average_metric, center_trim
-
-
-def train_model(epoch,
- dataset,
- model,
- criterion,
- optimizer,
- augment,
- quantizer=None,
- diffq=0,
- repeat=1,
- device="cpu",
- seed=None,
- workers=4,
- world_size=1,
- batch_size=16):
-
- if world_size > 1:
- sampler = DistributedSampler(dataset)
- sampler_epoch = epoch * repeat
- if seed is not None:
- sampler_epoch += seed * 1000
- sampler.set_epoch(sampler_epoch)
- batch_size //= world_size
- loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler, num_workers=workers)
- else:
- loader = DataLoader(dataset, batch_size=batch_size, num_workers=workers, shuffle=True)
- current_loss = 0
- model_size = 0
- for repetition in range(repeat):
- tq = tqdm.tqdm(loader,
- ncols=120,
- desc=f"[{epoch:03d}] train ({repetition + 1}/{repeat})",
- leave=False,
- file=sys.stdout,
- unit=" batch")
- total_loss = 0
- for idx, sources in enumerate(tq):
- if len(sources) < batch_size:
- # skip incomplete batches so that augment.Remix works properly
- continue
- sources = sources.to(device)
- sources = augment(sources)
- mix = sources.sum(dim=1)
-
- estimates = model(mix)
- sources = center_trim(sources, estimates)
- loss = criterion(estimates, sources)
- model_size = 0
- if quantizer is not None:
- model_size = quantizer.model_size()
-
- train_loss = loss + diffq * model_size
- train_loss.backward()
- grad_norm = 0
- for p in model.parameters():
- if p.grad is not None:
- grad_norm += p.grad.data.norm()**2
- grad_norm = grad_norm**0.5
- optimizer.step()
- optimizer.zero_grad()
-
- if quantizer is not None:
- model_size = model_size.item()
-
- total_loss += loss.item()
- current_loss = total_loss / (1 + idx)
- tq.set_postfix(loss=f"{current_loss:.4f}", ms=f"{model_size:.2f}",
- grad=f"{grad_norm:.5f}")
-
- # free some space before next round
- del sources, mix, estimates, loss, train_loss
-
- if world_size > 1:
- sampler.epoch += 1
-
- if world_size > 1:
- current_loss = average_metric(current_loss)
- return current_loss, model_size
-
-
-def validate_model(epoch,
- dataset,
- model,
- criterion,
- device="cpu",
- rank=0,
- world_size=1,
- shifts=0,
- overlap=0.25,
- split=False):
- indexes = range(rank, len(dataset), world_size)
- tq = tqdm.tqdm(indexes,
- ncols=120,
- desc=f"[{epoch:03d}] valid",
- leave=False,
- file=sys.stdout,
- unit=" track")
- current_loss = 0
- for index in tq:
- streams = dataset[index]
- # keep only the first ~15M samples (roughly the first five minutes) to avoid OOM on --upsample models
- streams = streams[..., :15_000_000]
- streams = streams.to(device)
- sources = streams[1:]
- mix = streams[0]
- estimates = apply_model(model, mix, shifts=shifts, split=split, overlap=overlap)
- loss = criterion(estimates, sources)
- current_loss += loss.item() / len(indexes)
- del estimates, streams, sources
-
- if world_size > 1:
- current_loss = average_metric(current_loss, len(indexes))
- return current_loss
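A tiny stand-alone illustration of the gradient-norm bookkeeping used in `train_model` above (squared L2 norms summed over parameters, then a square root); the linear model is just a placeholder, not the real Demucs network.

```python
import torch
from torch import nn

model = nn.Linear(4, 1)                        # placeholder model
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
grad_norm = sum(p.grad.data.norm() ** 2
                for p in model.parameters() if p.grad is not None) ** 0.5
print(f"grad={float(grad_norm):.5f}")          # same formatting as the tqdm postfix
```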
diff --git a/spaces/FarziBuilder/WORK/app.py b/spaces/FarziBuilder/WORK/app.py
deleted file mode 100644
index 14208dc798ff681aae401be394f33d0f06067fb3..0000000000000000000000000000000000000000
--- a/spaces/FarziBuilder/WORK/app.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# This Python 3 environment comes with many helpful analytics libraries installed
-# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
-# For example, here's several helpful packages to load
-
-import numpy as np # linear algebra
-import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
-
-# Input data files are available in the read-only "../input/" directory
-# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
-
-import os
-for dirname, _, filenames in os.walk('/kaggle/input'):
- for filename in filenames:
- print(os.path.join(dirname, filename))
-
-# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
-# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
-
-#|default_exp app
-
-#|export
-#pip install fastbook
-import fastbook
-from fastbook import *
-#pip install fastai
-from fastai.vision.widgets import *
-#pip install gradio
-import gradio as gr
-
-import IPython
-from IPython.display import display
-from PIL import Image
-
-import pathlib
-import platform
-
-# a learner exported on Windows pickles WindowsPath objects; map them to PosixPath on Linux
-#temp = pathlib.PosixPath
-#pathlib.PosixPath = pathlib.WindowsPath
-if platform.system() == 'Linux': pathlib.WindowsPath = pathlib.PosixPath
-
-def search_images(term, max_images=999999):
- print(f"Searching for '{term}'")
- return search_images_ddg(term, max_images)
-
-learn = load_learner('model.pkl')
-
-breeds = ('Labrador Retrievers','German Shepherds','Golden Retrievers','French Bulldogs','Bulldogs','Beagles','Poodles','Rottweilers','Chihuahua')
-
-def classify_image(img):
- pred,idx,probs = learn.predict(img)
- #return dict(zip(breeds, map(float,probs)))
- return "This is " + pred
-
-image = gr.components.Image()
-label = gr.components.Label()
-
-examples = ['dog.jpg','labrador.jpeg','dunno.jpg']
-
-for x in examples:
- Image.open(x)
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-intf.launch(inline=False)
\ No newline at end of file
diff --git a/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h
deleted file mode 100644
index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000
--- a/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.h
+++ /dev/null
@@ -1,172 +0,0 @@
-
-// jpge.h - C++ class for JPEG compression.
-// Public domain, Rich Geldreich
-// Alex Evans: Added RGBA support, linear memory allocator.
-#ifndef JPEG_ENCODER_H
-#define JPEG_ENCODER_H
-
-#include <stdint.h>   // for the int64_t types used in the API below
-
-namespace jpge
-{
- typedef unsigned char uint8;
- typedef signed short int16;
- typedef signed int int32;
- typedef unsigned short uint16;
- typedef unsigned int uint32;
- typedef unsigned int uint;
-
- // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
- enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
-
- // JPEG compression parameters structure.
- struct params
- {
- inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
-
- inline bool check_valid() const
- {
- if ((m_quality < 1) || (m_quality > 100)) return false;
- if ((uint)m_subsampling > (uint)H2V2) return false;
- return true;
- }
-
- // Quality: 1-100, higher is better. Typical values are around 50-95.
- int m_quality;
-
- // m_subsampling:
- // 0 = Y (grayscale) only
- // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
- // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
- // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
- subsampling_t m_subsampling;
-
- // Disables CbCr discrimination - only intended for testing.
- // If true, the Y quantization table is also used for the CbCr channels.
- bool m_no_chroma_discrim_flag;
-
- bool m_two_pass_flag;
- };
-
- // Writes JPEG image to a file.
- // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
- bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Writes JPEG image to memory buffer.
- // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
- // If return value is true, buf_size will be set to the size of the compressed data.
- bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
-
- // Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
- // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
- class output_stream
- {
- public:
- virtual ~output_stream() { };
- virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
- template <class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
- };
-
- // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
- class jpeg_encoder
- {
- public:
- jpeg_encoder();
- ~jpeg_encoder();
-
- // Initializes the compressor.
- // pStream: The stream object to use for writing compressed data.
- // params - Compression parameters structure, defined above.
- // width, height - Image dimensions.
- // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
- // Returns false on out of memory or if a stream write fails.
- bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
-
- const params &get_params() const { return m_params; }
-
- // Deinitializes the compressor, freeing any allocated memory. May be called at any time.
- void deinit();
-
- uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
- inline uint get_cur_pass() { return m_pass_num; }
-
- // Call this method with each source scanline.
- // width * src_channels bytes per scanline is expected (RGB or Y format).
- // You must call with NULL after all scanlines are processed to finish compression.
- // Returns false on out of memory or if a stream write fails.
- bool process_scanline(const void* pScanline);
-
- private:
- jpeg_encoder(const jpeg_encoder &);
- jpeg_encoder &operator =(const jpeg_encoder &);
-
- typedef int32 sample_array_t;
-
- output_stream *m_pStream;
- params m_params;
- uint8 m_num_components;
- uint8 m_comp_h_samp[3], m_comp_v_samp[3];
- int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
- int m_image_x_mcu, m_image_y_mcu;
- int m_image_bpl_xlt, m_image_bpl_mcu;
- int m_mcus_per_row;
- int m_mcu_x, m_mcu_y;
- uint8 *m_mcu_lines[16];
- uint8 m_mcu_y_ofs;
- sample_array_t m_sample_array[64];
- int16 m_coefficient_array[64];
- int32 m_quantization_tables[2][64];
- uint m_huff_codes[4][256];
- uint8 m_huff_code_sizes[4][256];
- uint8 m_huff_bits[4][17];
- uint8 m_huff_val[4][256];
- uint32 m_huff_count[4][256];
- int m_last_dc_val[3];
- enum { JPGE_OUT_BUF_SIZE = 2048 };
- uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
- uint8 *m_pOut_buf;
- uint m_out_buf_left;
- uint32 m_bit_buffer;
- uint m_bits_in;
- uint8 m_pass_num;
- bool m_all_stream_writes_succeeded;
-
- void optimize_huffman_table(int table_num, int table_len);
- void emit_byte(uint8 i);
- void emit_word(uint i);
- void emit_marker(int marker);
- void emit_jfif_app0();
- void emit_dqt();
- void emit_sof();
- void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
- void emit_dhts();
- void emit_sos();
- void emit_markers();
- void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
- void compute_quant_table(int32 *dst, int16 *src);
- void adjust_quant_table(int32 *dst, int32 *src);
- void first_pass_init();
- bool second_pass_init();
- bool jpg_open(int p_x_res, int p_y_res, int src_channels);
- void load_block_8_8_grey(int x);
- void load_block_8_8(int x, int y, int c);
- void load_block_16_8(int x, int c);
- void load_block_16_8_8(int x, int c);
- void load_quantized_coefficients(int component_num);
- void flush_output_buffer();
- void put_bits(uint bits, uint len);
- void code_coefficients_pass_one(int component_num);
- void code_coefficients_pass_two(int component_num);
- void code_block(int component_num);
- void process_mcu_row();
- bool terminate_pass_one();
- bool terminate_pass_two();
- bool process_end_of_image();
- void load_mcu(const void* src);
- void clear();
- void init();
- };
-
-} // namespace jpge
-
-#endif // JPEG_ENCODER
\ No newline at end of file
diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/pretrained.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/pretrained.py
deleted file mode 100644
index 6aac5db100cc7a9084af96d2cd083f0c8fac473c..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/demucs/pretrained.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import logging
-
-from diffq import DiffQuantizer
-import torch.hub
-
-from .model import Demucs
-from .tasnet import ConvTasNet
-from .utils import set_state
-
-logger = logging.getLogger(__name__)
-ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/"
-
-PRETRAINED_MODELS = {
- 'demucs': 'e07c671f',
- 'demucs48_hq': '28a1282c',
- 'demucs_extra': '3646af93',
- 'demucs_quantized': '07afea75',
- 'tasnet': 'beb46fac',
- 'tasnet_extra': 'df3777b2',
- 'demucs_unittest': '09ebc15f',
-}
-
-SOURCES = ["drums", "bass", "other", "vocals"]
-
-
-def get_url(name):
- sig = PRETRAINED_MODELS[name]
- return ROOT + name + "-" + sig[:8] + ".th"
-
-
-def is_pretrained(name):
- return name in PRETRAINED_MODELS
-
-
-def load_pretrained(name):
- if name == "demucs":
- return demucs(pretrained=True)
- elif name == "demucs48_hq":
- return demucs(pretrained=True, hq=True, channels=48)
- elif name == "demucs_extra":
- return demucs(pretrained=True, extra=True)
- elif name == "demucs_quantized":
- return demucs(pretrained=True, quantized=True)
- elif name == "demucs_unittest":
- return demucs_unittest(pretrained=True)
- elif name == "tasnet":
- return tasnet(pretrained=True)
- elif name == "tasnet_extra":
- return tasnet(pretrained=True, extra=True)
- else:
- raise ValueError(f"Invalid pretrained name {name}")
-
-
-def _load_state(name, model, quantizer=None):
- url = get_url(name)
- state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True)
- set_state(model, quantizer, state)
- if quantizer:
- quantizer.detach()
-
-
-def demucs_unittest(pretrained=True):
- model = Demucs(channels=4, sources=SOURCES)
- if pretrained:
- _load_state('demucs_unittest', model)
- return model
-
-
-def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64):
- if not pretrained and (extra or quantized or hq):
- raise ValueError("if extra or quantized is True, pretrained must be True.")
- model = Demucs(sources=SOURCES, channels=channels)
- if pretrained:
- name = 'demucs'
- if channels != 64:
- name += str(channels)
- quantizer = None
- if sum([extra, quantized, hq]) > 1:
- raise ValueError("Only one of extra, quantized, hq, can be True.")
- if quantized:
- quantizer = DiffQuantizer(model, group_size=8, min_size=1)
- name += '_quantized'
- if extra:
- name += '_extra'
- if hq:
- name += '_hq'
- _load_state(name, model, quantizer)
- return model
-
-
-def tasnet(pretrained=True, extra=False):
- if not pretrained and extra:
- raise ValueError("if extra is True, pretrained must be True.")
- model = ConvTasNet(X=10, sources=SOURCES)
- if pretrained:
- name = 'tasnet'
- if extra:
- name = 'tasnet_extra'
- _load_state(name, model)
- return model
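An illustrative use of the registry above, assuming these functions are importable; `load_pretrained` would download the corresponding checkpoint, so treat this as a sketch rather than something to run blindly.

```python
print(is_pretrained("demucs48_hq"))   # True
print(get_url("demucs"))              # https://dl.fbaipublicfiles.com/demucs/v3.0/demucs-e07c671f.th
print(SOURCES)                        # ['drums', 'bass', 'other', 'vocals'] (the four output stems)

model = load_pretrained("demucs_unittest")  # smallest checkpoint, meant for tests
```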
diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_123812KB .py
deleted file mode 100644
index b82f06bb4993cd63f076e68d7e24185269b1bc42..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_123812KB .py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
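A shape sketch for the blocks defined above on a dummy spectrogram-like tensor, assuming this module (and its `spec_utils` import) loads; the sizes are only meant to show the stride-2 encoder, the ASPP bottleneck, and the x2 upsampling decoder.

```python
import torch

x = torch.randn(1, 16, 64, 64)      # (batch, channels, freq bins, frames)

enc = Encoder(16, 32, ksize=3, stride=2, pad=1)
h, skip = enc(x)
print(h.shape, skip.shape)          # (1, 32, 32, 32) and (1, 32, 64, 64)

aspp = ASPPModule(32, 64)
print(aspp(h).shape)                # (1, 64, 32, 32)

dec = Decoder(64, 32)
print(dec(aspp(h)).shape)           # upsampled by 2 -> (1, 32, 64, 64)
```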
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/dev/synthesis_engine/mock.py b/spaces/GaenKoki/voicevox/voicevox_engine/dev/synthesis_engine/mock.py
deleted file mode 100644
index 3a1b47ac3ca86560a9cc4a379890a9c9609d1d4a..0000000000000000000000000000000000000000
--- a/spaces/GaenKoki/voicevox/voicevox_engine/dev/synthesis_engine/mock.py
+++ /dev/null
@@ -1,136 +0,0 @@
-from logging import getLogger
-from typing import Any, Dict, List, Optional
-
-import numpy as np
-from pyopenjtalk import tts
-from scipy.signal import resample
-
-from ...model import AccentPhrase, AudioQuery
-from ...synthesis_engine import SynthesisEngineBase
-from ...synthesis_engine.synthesis_engine import to_flatten_moras
-
-
-class MockSynthesisEngine(SynthesisEngineBase):
- """
- SynthesisEngine [Mock]
- """
-
- def __init__(
- self,
- speakers: str,
- supported_devices: Optional[str] = None,
- ):
- """
- __init__ [Mock]
- """
- super().__init__()
-
- self._speakers = speakers
- self._supported_devices = supported_devices
- self.default_sampling_rate = 24000
-
- @property
- def speakers(self) -> str:
- return self._speakers
-
- @property
- def supported_devices(self) -> Optional[str]:
- return self._supported_devices
-
- def replace_phoneme_length(
- self, accent_phrases: List[AccentPhrase], speaker_id: int
- ) -> List[AccentPhrase]:
- """
- replace_phoneme_length 入力accent_phrasesを変更せずにそのまま返します [Mock]
-
- Parameters
- ----------
- accent_phrases : List[AccentPhrase]
- フレーズ句のリスト
- speaker_id : int
- 話者
-
- Returns
- -------
- List[AccentPhrase]
- フレーズ句のリスト(変更なし)
- """
- return accent_phrases
-
- def replace_mora_pitch(
- self, accent_phrases: List[AccentPhrase], speaker_id: int
- ) -> List[AccentPhrase]:
- """
- replace_mora_pitch 入力accent_phrasesを変更せずにそのまま返します [Mock]
-
- Parameters
- ----------
- accent_phrases : List[AccentPhrase]
- フレーズ句のリスト
- speaker_id : int
- 話者
-
- Returns
- -------
- List[AccentPhrase]
- フレーズ句のリスト(変更なし)
- """
- return accent_phrases
-
- def _synthesis_impl(self, query: AudioQuery, speaker_id: int) -> np.ndarray:
- """
- synthesis voicevox coreを使わずに、音声合成する [Mock]
-
- Parameters
- ----------
- query : AudioQuery
- /audio_query APIで得たjson
- speaker_id : int
- 話者
-
- Returns
- -------
- wave [npt.NDArray[np.int16]]
- 音声波形データをNumPy配列で返します
- """
- # recall text in katakana
- flatten_moras = to_flatten_moras(query.accent_phrases)
- kana_text = "".join([mora.text for mora in flatten_moras])
-
- wave = self.forward(kana_text)
-
- # volume
- wave *= query.volumeScale
-
- return wave.astype("int16")
-
- def forward(self, text: str, **kwargs: Dict[str, Any]) -> np.ndarray:
- """
- forward tts via pyopenjtalk.tts()
- 参照→SynthesisEngine のdocstring [Mock]
-
- Parameters
- ----------
- text : str
- 入力文字列(例:読み上げたい文章をカタカナにした文字列、等)
-
- Returns
- -------
- wave [npt.NDArray[np.int16]]
- 音声波形データをNumPy配列で返します
-
- Note
- -------
- ここで行う音声合成では、調声(ピッチ等)を反映しない
-
- # pyopenjtalk.tts()の出力仕様
- dtype=np.float64, 16 bit, mono 48000 Hz
-
- # resampleの説明
- 非モック実装(decode_forward)と合わせるために、出力を24kHzに変換した。
- """
- logger = getLogger("uvicorn")  # for use from within FastAPI / Uvicorn
- logger.info("[Mock] input text: %s" % text)
- wave, sr = tts(text)
- wave = resample(wave, 24000 * len(wave) // 48000)
- return wave
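A short, hedged illustration of the resampling step at the end of `forward` above: `pyopenjtalk.tts()` returns 48 kHz audio, and the mock halves the sample count to produce 24 kHz output.

```python
import numpy as np
from scipy.signal import resample

sr_in, sr_out = 48000, 24000
wave = np.random.randn(sr_in * 2)                   # 2 seconds of dummy audio
resampled = resample(wave, sr_out * len(wave) // sr_in)
print(len(wave), len(resampled))                    # 96000 -> 48000
```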
diff --git a/spaces/Geethanjali/YouTube_Transcript_Summarizer/app.py b/spaces/Geethanjali/YouTube_Transcript_Summarizer/app.py
deleted file mode 100644
index 7343b665aacad17de26bd7d29c3001aecea88bcf..0000000000000000000000000000000000000000
--- a/spaces/Geethanjali/YouTube_Transcript_Summarizer/app.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from transformers import pipeline
-from youtube_transcript_api import YouTubeTranscriptApi
-from keybert import KeyBERT
-import gradio as gr
-from keyphrase_vectorizers import KeyphraseCountVectorizer
-import requests
-from bs4 import BeautifulSoup
-
-def summarize_transcript(url):
- video_id = url.split("=")[1]
-
- transcript = YouTubeTranscriptApi.get_transcript(video_id)
-
- result = ""
- for i in transcript:
- result += ' ' + i['text']
-
- summarizer = pipeline('summarization')
-
- num_iters = int(len(result)/1000)
- summarized_text = []
- for i in range(0, num_iters + 1):
- start = i * 1000
- end = (i + 1) * 1000
- out = summarizer(result[start:end])
- out = out[0]
- out = out['summary_text']
- summarized_text.append(out)
- summ = " ".join(summarized_text)  # join the chunk summaries instead of showing the raw list
- print(summ)
-
- #keywords
- words = []
- kw_model = KeyBERT()
- keywords = kw_model.extract_keywords(summ)
- w = kw_model.extract_keywords(summ, vectorizer=KeyphraseCountVectorizer())
- for s in w:
- words.append(s[0])
-
- #tags
- request = requests.get(url)
- html = BeautifulSoup(request.content,"html.parser")
- tags = html.find_all("meta",property = "og:video:tag")
- lst = []
- for tag in tags:
- lst.append(tag['content'])
- return (summ,words,lst)
-
-gradio_ui = gr.Interface(fn = summarize_transcript,
- inputs = [gr.inputs.Textbox(label = "Enter the YouTube URL below:")],
- outputs = [gr.outputs.Textbox(label = "Transcript Summary"),gr.outputs.Textbox(label = "Keywords"),gr.outputs.Textbox(label = "Hash Tags")],
- title = "YouTube Transcript Summarizer",
- theme = "grass",
- description = "Here You can see the SUMMARY,KEYWORDS and HASHTAGS of the YouTube video you want to watch")
-
-gradio_ui.launch(inline = False)
\ No newline at end of file
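A toy version of the character-based chunking used by `summarize_transcript` above; the string below is just a stand-in for the joined transcript.

```python
result = "x" * 2345                       # stand-in for the joined transcript
num_iters = int(len(result) / 1000)       # -> 2
chunks = [result[i * 1000:(i + 1) * 1000] for i in range(num_iters + 1)]
print([len(c) for c in chunks])           # [1000, 1000, 345]
```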
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/atss/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/atss/README.md
deleted file mode 100644
index 4ba915002576080f1fc1b2e007420d88aeb94187..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/atss/README.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection
-
-## Introduction
-
-[ALGORITHM]
-
-```latex
-@article{zhang2019bridging,
- title = {Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection},
- author = {Zhang, Shifeng and Chi, Cheng and Yao, Yongqiang and Lei, Zhen and Li, Stan Z.},
- journal = {arXiv preprint arXiv:1912.02424},
- year = {2019}
-}
-```
-
-## Results and Models
-
-| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:---------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| R-50 | pytorch | 1x | 3.7 | 19.7 | 39.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/atss/atss_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/atss/atss_r50_fpn_1x_coco/atss_r50_fpn_1x_coco_20200209-985f7bd0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/atss/atss_r50_fpn_1x_coco/atss_r50_fpn_1x_coco_20200209_102539.log.json) |
-| R-101 | pytorch | 1x | 5.6 | 12.3 | 41.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/atss/atss_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/atss/atss_r101_fpn_1x_coco/atss_r101_fpn_1x_20200825-dfcadd6f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/atss/atss_r101_fpn_1x_coco/atss_r101_fpn_1x_20200825-dfcadd6f.log.json) |
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/sparse_rcnn.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/sparse_rcnn.py
deleted file mode 100644
index 0dbd0250f189e610a0bbc72b0dab2559e26857ae..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/sparse_rcnn.py
+++ /dev/null
@@ -1,110 +0,0 @@
-from ..builder import DETECTORS
-from .two_stage import TwoStageDetector
-
-
-@DETECTORS.register_module()
-class SparseRCNN(TwoStageDetector):
- r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with
- Learnable Proposals <https://arxiv.org/abs/2011.12450>`_"""
-
- def __init__(self, *args, **kwargs):
- super(SparseRCNN, self).__init__(*args, **kwargs)
- assert self.with_rpn, 'Sparse R-CNN does not support external proposals'
-
- def forward_train(self,
- img,
- img_metas,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=None,
- gt_masks=None,
- proposals=None,
- **kwargs):
- """Forward function of SparseR-CNN in train stage.
-
- Args:
- img (Tensor): of shape (N, C, H, W) encoding input images.
- Typically these should be mean centered and std scaled.
- img_metas (list[dict]): list of image info dict where each dict
- has: 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- :class:`mmdet.datasets.pipelines.Collect`.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
- gt_masks (List[Tensor], optional) : Segmentation masks for
- each box. But we don't support it in this architecture.
- proposals (List[Tensor], optional): override rpn proposals with
- custom proposals. Use when `with_rpn` is False.
-
- Returns:
- dict[str, Tensor]: a dictionary of loss components
- """
-
- assert proposals is None, 'Sparse R-CNN does not support' \
- ' external proposals'
- assert gt_masks is None, 'Sparse R-CNN does not support instance segmentation'
-
- x = self.extract_feat(img)
- proposal_boxes, proposal_features, imgs_whwh = \
- self.rpn_head.forward_train(x, img_metas)
- roi_losses = self.roi_head.forward_train(
- x,
- proposal_boxes,
- proposal_features,
- img_metas,
- gt_bboxes,
- gt_labels,
- gt_bboxes_ignore=gt_bboxes_ignore,
- gt_masks=gt_masks,
- imgs_whwh=imgs_whwh)
- return roi_losses
-
- def simple_test(self, img, img_metas, rescale=False):
- """Test function without test time augmentation.
-
- Args:
- imgs (list[torch.Tensor]): List of multiple images
- img_metas (list[dict]): List of image information.
- rescale (bool): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[list[np.ndarray]]: BBox results of each image and classes.
- The outer list corresponds to each image. The inner list
- corresponds to each class.
- """
- x = self.extract_feat(img)
- proposal_boxes, proposal_features, imgs_whwh = \
- self.rpn_head.simple_test_rpn(x, img_metas)
- bbox_results = self.roi_head.simple_test(
- x,
- proposal_boxes,
- proposal_features,
- img_metas,
- imgs_whwh=imgs_whwh,
- rescale=rescale)
- return bbox_results
-
- def forward_dummy(self, img):
- """Used for computing network flops.
-
- See `mmdetection/tools/analysis_tools/get_flops.py`
- """
- # backbone
- x = self.extract_feat(img)
- # rpn
- num_imgs = len(img)
- dummy_img_metas = [
- dict(img_shape=(800, 1333, 3)) for _ in range(num_imgs)
- ]
- proposal_boxes, proposal_features, imgs_whwh = \
- self.rpn_head.simple_test_rpn(x, dummy_img_metas)
- # roi_head
- roi_outs = self.roi_head.forward_dummy(x, proposal_boxes,
- proposal_features,
- dummy_img_metas)
- return roi_outs
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index e36c83ba601884b81c06ee69445a94e76224c828..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/deeplabv3plus_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
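Configs like this one are assembled by merging the `_base_` files listed above with the local overrides. A rough loading sketch, assuming an mmcv version where `Config` lives in the top-level package and the path resolves from the repo root:

```python
from mmcv import Config

cfg = Config.fromfile('configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_40k_voc12aug.py')
print(cfg.model.decode_head.num_classes)   # 21, the override applied by this file
```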
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index a441013a4c1adc39fc064dbac23caaac9efdc4a6..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/GroveStreet/GTA_SOVITS/vdecoder/nsf_hifigan/env.py b/spaces/GroveStreet/GTA_SOVITS/vdecoder/nsf_hifigan/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vdecoder/nsf_hifigan/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-def build_env(config, config_name, path):
- t_path = os.path.join(path, config_name)
- if config != t_path:
- os.makedirs(path, exist_ok=True)
- shutil.copyfile(config, os.path.join(path, config_name))
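A minimal usage sketch of the two helpers above: `AttrDict` exposes config keys as attributes, and `build_env` copies the config file next to the checkpoints.

```python
h = AttrDict({"sampling_rate": 44100, "num_mels": 128})
print(h.sampling_rate, h["num_mels"])   # 44100 128

# build_env("config.json", "config.json", "checkpoints/")
# would create checkpoints/ and copy config.json into it (skipped if the paths already match).
```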
diff --git a/spaces/Hanqix/oxford_pet_classify/app.py b/spaces/Hanqix/oxford_pet_classify/app.py
deleted file mode 100644
index 2a6f059ba8c4dec08df900b2afbb5481b6a5f37a..0000000000000000000000000000000000000000
--- a/spaces/Hanqix/oxford_pet_classify/app.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from fastai.vision.all import *
-import gradio as gr
-learn = load_learner('petClassify.pkl')
-labels = learn.dls.vocab
-def predict(img):
- img = PILImage.create(img)
- pred, idx, probs = learn.predict(img)
- return {labels[i] : float(probs[i]) for i in range(len(labels))}
-
-
-# print(predict('licensed-image.jpeg'))
-title = "Pet classifier"
-description = "Oxford pets classifier based on fine tuned resnet 50"
-article = "Plaintext"
-enable_queue = True
-interpretation= 'default'
-examples = ['licensed-image.jpeg']
-gr.Interface(fn=predict, inputs=gr.inputs.Image(shape=(512, 512)), outputs=gr.outputs.Label(num_top_classes=3),
- description=description, title=title, article=article,
- examples=examples, interpretation=interpretation
- ).launch(share=False)
\ No newline at end of file
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/token_block_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/token_block_dataset.py
deleted file mode 100644
index d2c65fd7e058072911c3aa60bfc760288a0f83e5..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/token_block_dataset.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from fairseq.data import FairseqDataset, plasma_utils
-from fairseq.data.indexed_dataset import best_fitting_int_dtype
-from typing import Tuple
-
-
-class TokenBlockDataset(FairseqDataset):
- """Break a Dataset of tokens into blocks.
-
- Args:
- dataset (~torch.utils.data.Dataset): dataset to break into blocks
- sizes (List[int]): sentence lengths (required for 'complete' and 'eos')
- block_size (int): maximum block size (ignored in 'eos' break mode)
- break_mode (str, optional): Mode used for breaking tokens. Values can
- be one of:
- - 'none': break tokens into equally sized blocks (up to block_size)
- - 'complete': break tokens into blocks (up to block_size) such that
- blocks contain complete sentences, although block_size may be
- exceeded if some sentences exceed block_size
- - 'complete_doc': similar to 'complete' mode, but do not
- cross document boundaries
- - 'eos': each block contains one sentence (block_size is ignored)
- include_targets (bool, optional): return next tokens as targets
- (default: False).
- document_sep_len (int, optional): document separator size (required for
- 'complete_doc' break mode). Typically 1 if the sentences have eos
- and 0 otherwise.
- """
-
- def __init__(
- self,
- dataset,
- sizes,
- block_size,
- pad,
- eos,
- break_mode=None,
- include_targets=False,
- document_sep_len=1,
- use_plasma_view=False,
- split_path=None,
- plasma_path=None,
- ):
-
- super().__init__()
- self.dataset = dataset
- self.pad = pad
- self.eos = eos
- self.include_targets = include_targets
-
- assert len(dataset) > 0
-
- assert len(dataset) == len(sizes)
- _sizes, block_to_dataset_index, slice_indices = self._build_slice_indices(
- sizes, break_mode, document_sep_len, block_size
- )
- if use_plasma_view:
- plasma_id = (block_size, document_sep_len, str(break_mode), len(dataset))
- self._slice_indices = plasma_utils.PlasmaView(
- slice_indices, split_path, (plasma_id, 0), plasma_path=plasma_path
- )
- self._sizes = plasma_utils.PlasmaView(
- _sizes, split_path, (plasma_id, 1), plasma_path=plasma_path
- )
- self._block_to_dataset_index = plasma_utils.PlasmaView(
- block_to_dataset_index, split_path, (plasma_id, 2), plasma_path=plasma_path,
- )
- else:
- self._slice_indices = plasma_utils.PlasmaArray(slice_indices)
- self._sizes = plasma_utils.PlasmaArray(_sizes)
- self._block_to_dataset_index = plasma_utils.PlasmaArray(
- block_to_dataset_index
- )
-
- @staticmethod
- def _build_slice_indices(
- sizes, break_mode, document_sep_len, block_size
- ) -> Tuple[np.ndarray]:
- """Use token_block_utils_fast to build arrays for indexing into self.dataset"""
- try:
- from fairseq.data.token_block_utils_fast import (
- _get_slice_indices_fast,
- _get_block_to_dataset_index_fast,
- )
- except ImportError:
- raise ImportError(
- "Please build Cython components with: `pip install --editable .` "
- "or `python setup.py build_ext --inplace`"
- )
-
- if isinstance(sizes, list):
- sizes = np.array(sizes, dtype=np.int64)
- else:
- if torch.is_tensor(sizes):
- sizes = sizes.numpy()
- sizes = sizes.astype(np.int64)
-
- break_mode = break_mode if break_mode is not None else "none"
-
- # For "eos" break-mode, block_size is not required parameters.
- if break_mode == "eos" and block_size is None:
- block_size = 0
-
- slice_indices = _get_slice_indices_fast(
- sizes, str(break_mode), block_size, document_sep_len
- )
- _sizes = slice_indices[:, 1] - slice_indices[:, 0]
-
- # build index mapping block indices to the underlying dataset indices
- if break_mode == "eos":
- # much faster version for eos break mode
- block_to_dataset_index = np.stack(
- [
- np.arange(len(sizes)), # starting index in dataset
- np.zeros(
- len(sizes), dtype=np.compat.long
- ), # starting offset within starting index
- np.arange(len(sizes)), # ending index in dataset
- ],
- 1,
- )
- else:
- block_to_dataset_index = _get_block_to_dataset_index_fast(
- sizes, slice_indices,
- )
- size_dtype = np.uint16 if block_size < 65535 else np.uint32
- num_tokens = slice_indices[-1].max()
- slice_indices_dtype = best_fitting_int_dtype(num_tokens)
- slice_indices = slice_indices.astype(slice_indices_dtype)
- _sizes = _sizes.astype(size_dtype)
- block_to_dataset_index = block_to_dataset_index.astype(slice_indices_dtype)
- return _sizes, block_to_dataset_index, slice_indices
-
- @property
- def slice_indices(self):
- return self._slice_indices.array
-
- @property
- def sizes(self):
- return self._sizes.array
-
- @property
- def block_to_dataset_index(self):
- return self._block_to_dataset_index.array
-
- def attr(self, attr: str, index: int):
- start_ds_idx, _, _ = self.block_to_dataset_index[index]
- return self.dataset.attr(attr, start_ds_idx)
-
- def __getitem__(self, index):
- start_ds_idx, start_offset, end_ds_idx = self.block_to_dataset_index[index]
-
- buffer = torch.cat(
- [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)]
- )
- slice_s, slice_e = self.slice_indices[index]
- length = slice_e - slice_s
- s, e = start_offset, start_offset + length
- item = buffer[s:e]
-
- if self.include_targets:
- # *target* is the original sentence (=item)
- # *source* is shifted right by 1 (maybe left-padded with eos)
- # *past_target* is shifted right by 2 (left-padded as needed)
- if s == 0:
- source = torch.cat([item.new([self.eos]), buffer[0 : e - 1]])
- past_target = torch.cat(
- [item.new([self.pad, self.eos]), buffer[0 : e - 2]]
- )
- else:
- source = buffer[s - 1 : e - 1]
- if s == 1:
- past_target = torch.cat([item.new([self.eos]), buffer[0 : e - 2]])
- else:
- past_target = buffer[s - 2 : e - 2]
-
- return source, item, past_target
-
- return item
-
- def __len__(self):
- return len(self.slice_indices)
-
- @property
- def supports_prefetch(self):
- return getattr(self.dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- self.dataset.prefetch(
- {
- ds_idx
- for index in indices
- for start_ds_idx, _, end_ds_idx in [self.block_to_dataset_index[index]]
- for ds_idx in range(start_ds_idx, end_ds_idx + 1)
- }
- )
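A toy, comment-only illustration of the break modes documented in the class docstring, for three sentences of 3, 4 and 5 tokens and `block_size=8`; the real slicing is done by the compiled `token_block_utils_fast` helpers, so this is conceptual only.

```python
sizes = [3, 4, 5]   # per-sentence token counts, 12 tokens in total

# 'none':     cut the concatenated token stream into equal blocks
#             -> token ranges [0:8) and [8:12)
# 'complete': pack whole sentences while trying to stay within block_size
#             -> sentences {0, 1} (7 tokens), then {2} (5 tokens)
# 'eos':      one sentence per block -> three blocks of 3, 4 and 5 tokens
```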
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq_cli/preprocess.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq_cli/preprocess.py
deleted file mode 100644
index 4ee9a1e3ba08f9f6ef4c01b9ee34374c9528eb19..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq_cli/preprocess.py
+++ /dev/null
@@ -1,409 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Data pre-processing: build vocabularies and binarize training data.
-"""
-
-import logging
-import os
-import shutil
-import sys
-from collections import Counter
-from itertools import zip_longest
-from multiprocessing import Pool
-
-from fairseq import options, tasks, utils
-from fairseq.binarizer import Binarizer
-from fairseq.data import indexed_dataset
-from fairseq.file_chunker_utils import find_offsets
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("fairseq_cli.preprocess")
-
-
-def main(args):
- utils.import_user_module(args)
-
- os.makedirs(args.destdir, exist_ok=True)
-
- logger.addHandler(
- logging.FileHandler(
- filename=os.path.join(args.destdir, "preprocess.log"),
- )
- )
- logger.info(args)
-
- assert args.dataset_impl != "huffman", "preprocessing.py doesn't support Huffman yet, use HuffmanCodeBuilder directly."
-
- task = tasks.get_task(args.task)
-
- def train_path(lang):
- return "{}{}".format(args.trainpref, ("." + lang) if lang else "")
-
- def file_name(prefix, lang):
- fname = prefix
- if lang is not None:
- fname += ".{lang}".format(lang=lang)
- return fname
-
- def dest_path(prefix, lang):
- return os.path.join(args.destdir, file_name(prefix, lang))
-
- def dict_path(lang):
- return dest_path("dict", lang) + ".txt"
-
- def build_dictionary(filenames, src=False, tgt=False):
- assert src ^ tgt
- return task.build_dictionary(
- filenames,
- workers=args.workers,
- threshold=args.thresholdsrc if src else args.thresholdtgt,
- nwords=args.nwordssrc if src else args.nwordstgt,
- padding_factor=args.padding_factor,
- )
-
- target = not args.only_source
-
- if not args.srcdict and os.path.exists(dict_path(args.source_lang)):
- raise FileExistsError(dict_path(args.source_lang))
- if target and not args.tgtdict and os.path.exists(dict_path(args.target_lang)):
- raise FileExistsError(dict_path(args.target_lang))
-
- if args.joined_dictionary:
- assert (
- not args.srcdict or not args.tgtdict
- ), "cannot use both --srcdict and --tgtdict with --joined-dictionary"
-
- if args.srcdict:
- src_dict = task.load_dictionary(args.srcdict)
- elif args.tgtdict:
- src_dict = task.load_dictionary(args.tgtdict)
- else:
- assert (
- args.trainpref
- ), "--trainpref must be set if --srcdict is not specified"
- src_dict = build_dictionary(
- {train_path(lang) for lang in [args.source_lang, args.target_lang]},
- src=True,
- )
- tgt_dict = src_dict
- else:
- if args.srcdict:
- src_dict = task.load_dictionary(args.srcdict)
- else:
- assert (
- args.trainpref
- ), "--trainpref must be set if --srcdict is not specified"
- src_dict = build_dictionary([train_path(args.source_lang)], src=True)
-
- if target:
- if args.tgtdict:
- tgt_dict = task.load_dictionary(args.tgtdict)
- else:
- assert (
- args.trainpref
- ), "--trainpref must be set if --tgtdict is not specified"
- tgt_dict = build_dictionary([train_path(args.target_lang)], tgt=True)
- else:
- tgt_dict = None
-
- src_dict.save(dict_path(args.source_lang))
- if target and tgt_dict is not None:
- tgt_dict.save(dict_path(args.target_lang))
-
- if args.dict_only:
- return
-
- def make_binary_dataset(vocab, input_prefix, output_prefix, lang, num_workers):
- logger.info("[{}] Dictionary: {} types".format(lang, len(vocab)))
- n_seq_tok = [0, 0]
- replaced = Counter()
-
- def merge_result(worker_result):
- replaced.update(worker_result["replaced"])
- n_seq_tok[0] += worker_result["nseq"]
- n_seq_tok[1] += worker_result["ntok"]
-
- input_file = "{}{}".format(
- input_prefix, ("." + lang) if lang is not None else ""
- )
- offsets = find_offsets(input_file, num_workers)
- (first_chunk, *more_chunks) = zip(offsets, offsets[1:])
- pool = None
- if num_workers > 1:
- pool = Pool(processes=num_workers - 1)
- for worker_id, (start_offset, end_offset) in enumerate(
- more_chunks, start=1
- ):
- prefix = "{}{}".format(output_prefix, worker_id)
- pool.apply_async(
- binarize,
- (
- args,
- input_file,
- vocab,
- prefix,
- lang,
- start_offset,
- end_offset,
- ),
- callback=merge_result,
- )
- pool.close()
-
- ds = indexed_dataset.make_builder(
- dataset_dest_file(args, output_prefix, lang, "bin"),
- impl=args.dataset_impl,
- vocab_size=len(vocab),
- )
- merge_result(
- Binarizer.binarize(
- input_file,
- vocab,
- lambda t: ds.add_item(t),
- offset=first_chunk[0],
- end=first_chunk[1],
- )
- )
- if num_workers > 1:
- pool.join()
- for worker_id in range(1, num_workers):
- prefix = "{}{}".format(output_prefix, worker_id)
- temp_file_path = dataset_dest_prefix(args, prefix, lang)
- ds.merge_file_(temp_file_path)
- os.remove(indexed_dataset.data_file_path(temp_file_path))
- os.remove(indexed_dataset.index_file_path(temp_file_path))
-
- ds.finalize(dataset_dest_file(args, output_prefix, lang, "idx"))
-
- logger.info(
- "[{}] {}: {} sents, {} tokens, {:.3}% replaced by {}".format(
- lang,
- input_file,
- n_seq_tok[0],
- n_seq_tok[1],
- 100 * sum(replaced.values()) / n_seq_tok[1],
- vocab.unk_word,
- )
- )
-
- def make_binary_alignment_dataset(input_prefix, output_prefix, num_workers):
- nseq = [0]
-
- def merge_result(worker_result):
- nseq[0] += worker_result["nseq"]
-
- input_file = input_prefix
- offsets = find_offsets(input_file, num_workers)
- (first_chunk, *more_chunks) = zip(offsets, offsets[1:])
- pool = None
- if num_workers > 1:
- pool = Pool(processes=num_workers - 1)
- for worker_id, (start_offset, end_offset) in enumerate(
- more_chunks, start=1
- ):
- prefix = "{}{}".format(output_prefix, worker_id)
- pool.apply_async(
- binarize_alignments,
- (
- args,
- input_file,
- utils.parse_alignment,
- prefix,
- start_offset,
- end_offset,
- ),
- callback=merge_result,
- )
- pool.close()
-
- ds = indexed_dataset.make_builder(
- dataset_dest_file(args, output_prefix, None, "bin"), impl=args.dataset_impl
- )
-
- merge_result(
- Binarizer.binarize_alignments(
- input_file,
- utils.parse_alignment,
- lambda t: ds.add_item(t),
- offset=first_chunk[0],
- end=first_chunk[1],
- )
- )
- if num_workers > 1:
- pool.join()
- for worker_id in range(1, num_workers):
- prefix = "{}{}".format(output_prefix, worker_id)
- temp_file_path = dataset_dest_prefix(args, prefix, None)
- ds.merge_file_(temp_file_path)
- os.remove(indexed_dataset.data_file_path(temp_file_path))
- os.remove(indexed_dataset.index_file_path(temp_file_path))
-
- ds.finalize(dataset_dest_file(args, output_prefix, None, "idx"))
-
- logger.info("[alignments] {}: parsed {} alignments".format(input_file, nseq[0]))
-
- def make_dataset(vocab, input_prefix, output_prefix, lang, num_workers=1):
- if args.dataset_impl == "raw":
- # Copy original text file to destination folder
- output_text_file = dest_path(
- output_prefix + ".{}-{}".format(args.source_lang, args.target_lang),
- lang,
- )
- shutil.copyfile(file_name(input_prefix, lang), output_text_file)
- else:
- make_binary_dataset(vocab, input_prefix, output_prefix, lang, num_workers)
-
- def make_all(lang, vocab):
- if args.trainpref:
- make_dataset(vocab, args.trainpref, "train", lang, num_workers=args.workers)
- if args.validpref:
- for k, validpref in enumerate(args.validpref.split(",")):
- outprefix = "valid{}".format(k) if k > 0 else "valid"
- make_dataset(
- vocab, validpref, outprefix, lang, num_workers=args.workers
- )
- if args.testpref:
- for k, testpref in enumerate(args.testpref.split(",")):
- outprefix = "test{}".format(k) if k > 0 else "test"
- make_dataset(vocab, testpref, outprefix, lang, num_workers=args.workers)
-
- def make_all_alignments():
- if args.trainpref and os.path.exists(args.trainpref + "." + args.align_suffix):
- make_binary_alignment_dataset(
- args.trainpref + "." + args.align_suffix,
- "train.align",
- num_workers=args.workers,
- )
- if args.validpref and os.path.exists(args.validpref + "." + args.align_suffix):
- make_binary_alignment_dataset(
- args.validpref + "." + args.align_suffix,
- "valid.align",
- num_workers=args.workers,
- )
- if args.testpref and os.path.exists(args.testpref + "." + args.align_suffix):
- make_binary_alignment_dataset(
- args.testpref + "." + args.align_suffix,
- "test.align",
- num_workers=args.workers,
- )
-
- make_all(args.source_lang, src_dict)
- if target:
- make_all(args.target_lang, tgt_dict)
- if args.align_suffix:
- make_all_alignments()
-
- logger.info("Wrote preprocessed data to {}".format(args.destdir))
-
- if args.alignfile:
- assert args.trainpref, "--trainpref must be set if --alignfile is specified"
- src_file_name = train_path(args.source_lang)
- tgt_file_name = train_path(args.target_lang)
- freq_map = {}
- with open(args.alignfile, "r", encoding="utf-8") as align_file:
- with open(src_file_name, "r", encoding="utf-8") as src_file:
- with open(tgt_file_name, "r", encoding="utf-8") as tgt_file:
- for a, s, t in zip_longest(align_file, src_file, tgt_file):
- si = src_dict.encode_line(s, add_if_not_exist=False)
- ti = tgt_dict.encode_line(t, add_if_not_exist=False)
- ai = list(map(lambda x: tuple(x.split("-")), a.split()))
- for sai, tai in ai:
- srcidx = si[int(sai)]
- tgtidx = ti[int(tai)]
- if srcidx != src_dict.unk() and tgtidx != tgt_dict.unk():
- assert srcidx != src_dict.pad()
- assert srcidx != src_dict.eos()
- assert tgtidx != tgt_dict.pad()
- assert tgtidx != tgt_dict.eos()
-
- if srcidx not in freq_map:
- freq_map[srcidx] = {}
- if tgtidx not in freq_map[srcidx]:
- freq_map[srcidx][tgtidx] = 1
- else:
- freq_map[srcidx][tgtidx] += 1
-
- align_dict = {}
- for srcidx in freq_map.keys():
- align_dict[srcidx] = max(freq_map[srcidx], key=freq_map[srcidx].get)
-
- with open(
- os.path.join(
- args.destdir,
- "alignment.{}-{}.txt".format(args.source_lang, args.target_lang),
- ),
- "w",
- encoding="utf-8",
- ) as f:
- for k, v in align_dict.items():
- print("{} {}".format(src_dict[k], tgt_dict[v]), file=f)
-
-
-def binarize(args, filename, vocab, output_prefix, lang, offset, end, append_eos=True):
- ds = indexed_dataset.make_builder(
- dataset_dest_file(args, output_prefix, lang, "bin"),
- impl=args.dataset_impl,
- vocab_size=len(vocab),
- )
-
- def consumer(tensor):
- ds.add_item(tensor)
-
- res = Binarizer.binarize(
- filename, vocab, consumer, append_eos=append_eos, offset=offset, end=end
- )
- ds.finalize(dataset_dest_file(args, output_prefix, lang, "idx"))
- return res
-
-
-def binarize_alignments(args, filename, parse_alignment, output_prefix, offset, end):
- ds = indexed_dataset.make_builder(
- dataset_dest_file(args, output_prefix, None, "bin"),
- impl=args.dataset_impl,
- vocab_size=None,
- )
-
- def consumer(tensor):
- ds.add_item(tensor)
-
- res = Binarizer.binarize_alignments(
- filename, parse_alignment, consumer, offset=offset, end=end
- )
- ds.finalize(dataset_dest_file(args, output_prefix, None, "idx"))
- return res
-
-
-def dataset_dest_prefix(args, output_prefix, lang):
- base = "{}/{}".format(args.destdir, output_prefix)
- if lang is not None:
- lang_part = ".{}-{}.{}".format(args.source_lang, args.target_lang, lang)
- elif args.only_source:
- lang_part = ""
- else:
- lang_part = ".{}-{}".format(args.source_lang, args.target_lang)
-
- return "{}{}".format(base, lang_part)
-
-
-def dataset_dest_file(args, output_prefix, lang, extension):
- base = dataset_dest_prefix(args, output_prefix, lang)
- return "{}.{}".format(base, extension)
-
-
-def cli_main():
- parser = options.get_preprocessing_parser()
- args = parser.parse_args()
- main(args)
-
-
-if __name__ == "__main__":
- cli_main()
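A programmatic sketch of the usual command-line invocation, assuming fairseq is installed and `cli_main` above is in scope; the data paths are placeholders, and the flag names mirror the args consumed in `main`.

```python
import sys

sys.argv = [
    "fairseq-preprocess",
    "--source-lang", "de", "--target-lang", "en",
    "--trainpref", "data/train", "--validpref", "data/valid",
    "--destdir", "data-bin/de-en", "--workers", "8",
]
cli_main()   # builds the dictionaries and binarizes the data into --destdir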
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/modules/spade.py b/spaces/Iceclear/StableSR/StableSR/ldm/modules/spade.py
deleted file mode 100644
index 72845bdfb5ac0139aaa509681208804dc8444e71..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/modules/spade.py
+++ /dev/null
@@ -1,111 +0,0 @@
-"""
-Copyright (C) 2019 NVIDIA Corporation. All rights reserved.
-Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
-"""
-
-import re
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-# from models.networks.sync_batchnorm import SynchronizedBatchNorm2d
-import torch.nn.utils.spectral_norm as spectral_norm
-
-from ldm.modules.diffusionmodules.util import normalization
-
-
-# Returns a function that creates a normalization function
-# that does not condition on semantic map
-def get_nonspade_norm_layer(opt, norm_type='instance'):
- # helper function to get # output channels of the previous layer
- def get_out_channel(layer):
- if hasattr(layer, 'out_channels'):
- return getattr(layer, 'out_channels')
- return layer.weight.size(0)
-
- # this function will be returned
- def add_norm_layer(layer):
- nonlocal norm_type
- if norm_type.startswith('spectral'):
- layer = spectral_norm(layer)
- subnorm_type = norm_type[len('spectral'):]
-
- if subnorm_type == 'none' or len(subnorm_type) == 0:
- return layer
-
- # remove bias in the previous layer, which is meaningless
- # since it has no effect after normalization
- if getattr(layer, 'bias', None) is not None:
- delattr(layer, 'bias')
- layer.register_parameter('bias', None)
-
- if subnorm_type == 'batch':
- norm_layer = nn.BatchNorm2d(get_out_channel(layer), affine=True)
- elif subnorm_type == 'sync_batch':
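- # note: SynchronizedBatchNorm2d comes from the sync_batchnorm import that is
- # commented out at the top of this file, so this branch would raise NameError as written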
- norm_layer = SynchronizedBatchNorm2d(get_out_channel(layer), affine=True)
- elif subnorm_type == 'instance':
- norm_layer = nn.InstanceNorm2d(get_out_channel(layer), affine=False)
- else:
- raise ValueError('normalization layer %s is not recognized' % subnorm_type)
-
- return nn.Sequential(layer, norm_layer)
-
- return add_norm_layer
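-
- # Illustrative usage sketch (not part of the original file), shown as comments:
- # the returned closure wraps a layer with optional spectral norm plus the chosen
- # parameter-free norm, e.g.
- # norm_layer = get_nonspade_norm_layer(opt=None, norm_type='spectralinstance')
- # block = norm_layer(nn.Conv2d(3, 64, kernel_size=3))
- # # -> nn.Sequential(spectral_norm(Conv2d), nn.InstanceNorm2d(64, affine=False))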
-
-
-# Creates SPADE normalization layer based on the given configuration
-# SPADE consists of two steps. First, it normalizes the activations using
-# your favorite normalization method, such as Batch Norm or Instance Norm.
-# Second, it applies scale and bias to the normalized output, conditioned on
-# the segmentation map.
-# The format of |config_text| is spade(norm)(ks), where
-# (norm) specifies the type of parameter-free normalization.
-# (e.g. syncbatch, batch, instance)
-# (ks) specifies the size of kernel in the SPADE module (e.g. 3x3)
-# Example |config_text| will be spadesyncbatch3x3, or spadeinstance5x5.
-# Also, the other arguments are
-# |norm_nc|: the #channels of the normalized activations, hence the output dim of SPADE
-# |label_nc|: the #channels of the input semantic map, hence the input dim of SPADE
-class SPADE(nn.Module):
- def __init__(self, norm_nc, label_nc, config_text='spadeinstance3x3'):
- super().__init__()
-
- assert config_text.startswith('spade')
- parsed = re.search(r'spade(\D+)(\d)x\d', config_text)
- param_free_norm_type = str(parsed.group(1))
- ks = int(parsed.group(2))
-
- self.param_free_norm = normalization(norm_nc)
-
- # The dimension of the intermediate embedding space. Yes, hardcoded.
- nhidden = 128
-
- pw = ks // 2
- self.mlp_shared = nn.Sequential(
- nn.Conv2d(label_nc, nhidden, kernel_size=ks, padding=pw),
- nn.ReLU()
- )
- self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw)
- self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw)
-
- def forward(self, x_dic, segmap_dic, size=None):
-
- if size is None:
- segmap = segmap_dic[str(x_dic.size(-1))]
- x = x_dic
- else:
- x = x_dic[str(size)]
- segmap = segmap_dic[str(size)]
-
- # Part 1. generate parameter-free normalized activations
- normalized = self.param_free_norm(x)
-
- # Part 2. produce scaling and bias conditioned on semantic map
- # segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest')
- actv = self.mlp_shared(segmap)
- gamma = self.mlp_gamma(actv)
- beta = self.mlp_beta(actv)
-
- # apply scale and bias
- out = normalized * (1 + gamma) + beta
-
- return out
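-
- # Illustrative sketch (not part of the original file): how a config_text such as
- # 'spadeinstance3x3' decomposes under the regex used in __init__ (standard library only).
- # parsed = re.search(r'spade(\D+)(\d)x\d', 'spadeinstance3x3')
- # param_free_norm_type, ks = parsed.group(1), int(parsed.group(2)) # 'instance', 3
- # pw = ks // 2 # padding of 1 for the 3x3 gamma/beta convs above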
diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/vdecoder/hifigan/utils.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
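- # e.g. get_padding(7, dilation=2) = (7*2 - 2) // 2 = 6, the "same" padding for a
- # dilated convolution whose effective kernel size is dilation*(kernel_size-1) + 1 = 13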
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern) # get checkpoint paths
- cp_list = sorted(cp_list)# sort by iter
- if len(cp_list) > n_models: # if more than n_models models are found
- for cp in cp_list[:-n_models]: # delete the oldest models other than the latest n_models
- open(cp, 'w').close()# empty file contents
- os.unlink(cp)# delete file (move to trash when using Colab)
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/gen_debug_mask_dataset.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/gen_debug_mask_dataset.py
deleted file mode 100644
index 738f76875c82aa412063bb5bff15e69c46f20362..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/bin/gen_debug_mask_dataset.py
+++ /dev/null
@@ -1,61 +0,0 @@
-#!/usr/bin/env python3
-
-import glob
-import os
-
-import PIL.Image as Image
-import cv2
-import numpy as np
-import tqdm
-import shutil
-
-
-from saicinpainting.evaluation.utils import load_yaml
-
-
-def generate_masks_for_img(infile, outmask_pattern, mask_size=200, step=0.5):
- inimg = Image.open(infile)
- width, height = inimg.size
- step_abs = int(mask_size * step)
-
- mask = np.zeros((height, width), dtype='uint8')
- mask_i = 0
-
- for start_vertical in range(0, height - step_abs, step_abs):
- for start_horizontal in range(0, width - step_abs, step_abs):
- mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 255
-
- cv2.imwrite(outmask_pattern.format(mask_i), mask)
-
- mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 0
- mask_i += 1
-
-
-def main(args):
- if not args.indir.endswith('/'):
- args.indir += '/'
- if not args.outdir.endswith('/'):
- args.outdir += '/'
-
- config = load_yaml(args.config)
-
- in_files = list(glob.glob(os.path.join(args.indir, '**', f'*{config.img_ext}'), recursive=True))
- for infile in tqdm.tqdm(in_files):
- outimg = args.outdir + infile[len(args.indir):]
- outmask_pattern = outimg[:-len(config.img_ext)] + '_mask{:04d}.png'
-
- os.makedirs(os.path.dirname(outimg), exist_ok=True)
- shutil.copy2(infile, outimg)
-
- generate_masks_for_img(infile, outmask_pattern, **config.gen_kwargs)
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('config', type=str, help='Path to config for dataset generation')
- aparser.add_argument('indir', type=str, help='Path to folder with images')
- aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to')
-
- main(aparser.parse_args())
diff --git a/spaces/Jamkonams/AutoGPT/tests/integration/milvus_memory_tests.py b/spaces/Jamkonams/AutoGPT/tests/integration/milvus_memory_tests.py
deleted file mode 100644
index ec38bf2f72087b5da679d26594ebff97d8a09b19..0000000000000000000000000000000000000000
--- a/spaces/Jamkonams/AutoGPT/tests/integration/milvus_memory_tests.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# sourcery skip: snake-case-functions
-"""Tests for the MilvusMemory class."""
-import random
-import string
-import unittest
-
-from autogpt.config import Config
-
-try:
- from autogpt.memory.milvus import MilvusMemory
-
- class TestMilvusMemory(unittest.TestCase):
- """Tests for the MilvusMemory class."""
-
- def random_string(self, length: int) -> str:
- """Generate a random string of the given length."""
- return "".join(random.choice(string.ascii_letters) for _ in range(length))
-
- def setUp(self) -> None:
- """Set up the test environment."""
- cfg = Config()
- cfg.milvus_addr = "localhost:19530"
- self.memory = MilvusMemory(cfg)
- self.memory.clear()
-
- # Add example texts to the cache
- self.example_texts = [
- "The quick brown fox jumps over the lazy dog",
- "I love machine learning and natural language processing",
- "The cake is a lie, but the pie is always true",
- "ChatGPT is an advanced AI model for conversation",
- ]
-
- for text in self.example_texts:
- self.memory.add(text)
-
- # Add some random strings to test noise
- for _ in range(5):
- self.memory.add(self.random_string(10))
-
- def test_get_relevant(self) -> None:
- """Test getting relevant texts from the cache."""
- query = "I'm interested in artificial intelligence and NLP"
- num_relevant = 3
- relevant_texts = self.memory.get_relevant(query, num_relevant)
-
- print(f"Top {k} relevant texts for the query '{query}':")
- for i, text in enumerate(relevant_texts, start=1):
- print(f"{i}. {text}")
-
- self.assertEqual(len(relevant_texts), num_relevant)
- self.assertIn(self.example_texts[1], relevant_texts)
-
-except:
- print(
- "Skipping tests/integration/milvus_memory_tests.py as Milvus is not installed."
- )
diff --git a/spaces/JeffJing/ZookChatBot/steamship/data/package/package_version.py b/spaces/JeffJing/ZookChatBot/steamship/data/package/package_version.py
deleted file mode 100644
index aaac275d9e09ccaf09eb68020fb4b45a6789606b..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/data/package/package_version.py
+++ /dev/null
@@ -1,69 +0,0 @@
-from __future__ import annotations
-
-import json
-from typing import Any, Dict, Type
-
-from pydantic import BaseModel, Field
-
-from steamship.base.client import Client
-from steamship.base.model import CamelModel
-from steamship.base.request import Request
-
-
-class CreatePackageVersionRequest(Request):
- package_id: str = None
- handle: str = None
- type: str = "file"
- hosting_handler: str = None
- # Note: this is a Dict[str, Any] but should be transmitted to the Engine as a JSON string
- config_template: str = None
-
-
-class PackageVersion(CamelModel):
- client: Client = Field(None, exclude=True)
- id: str = None
- package_id: str = None
- handle: str = None
- config_template: Dict[str, Any] = None
-
- @classmethod
- def parse_obj(cls: Type[BaseModel], obj: Any) -> BaseModel:
- # TODO (enias): This needs to be solved at the engine side
- obj = obj["packageVersion"] if "packageVersion" in obj else obj
- return super().parse_obj(obj)
-
- @staticmethod
- def create(
- client: Client,
- package_id: str = None,
- handle: str = None,
- filename: str = None,
- filebytes: bytes = None,
- config_template: Dict[str, Any] = None,
- hosting_handler: str = None,
- ) -> PackageVersion:
-
- if filename is None and filebytes is None:
- raise Exception("Either filename or filebytes must be provided.")
- if filename is not None and filebytes is not None:
- raise Exception("Only either filename or filebytes should be provided.")
-
- if filename is not None:
- with open(filename, "rb") as f:
- filebytes = f.read()
-
- req = CreatePackageVersionRequest(
- handle=handle,
- package_id=package_id,
- config_template=json.dumps(config_template or {}),
- hosting_handler=hosting_handler,
- )
-
- task = client.post(
- "package/version/create",
- payload=req,
- file=("package.zip", filebytes, "multipart/form-data"),
- expect=PackageVersion,
- )
- task.wait()
- return task.output
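-
- # Illustrative note (not part of the original file): per the comment on
- # CreatePackageVersionRequest, config_template is a dict that is JSON-encoded
- # before transmission, e.g. {"temperature": 0.7} is sent as '{"temperature": 0.7}'.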
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/locale/extract_locale.py b/spaces/JohnSmith9982/ChuanhuChatGPT/locale/extract_locale.py
deleted file mode 100644
index 32b0924bd6dffe150cb3e481ddadef836b91b83c..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/locale/extract_locale.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import os
-import json
-import re
-
-# Define regular expression patterns
-pattern = r'i18n\((\"{3}.*?\"{3}|\".*?\")\)'
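-# For example, a call such as i18n("Submit") matches the pattern; the captured group
-# is '"Submit"', and match.strip('()"') below reduces it to the key 'Submit'.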
-
-# Load the .py file
-with open('ChuanhuChatbot.py', 'r', encoding='utf-8') as f:
- contents = f.read()
-
-# Load the .py files in the modules folder
-for filename in os.listdir("modules"):
- if filename.endswith(".py"):
- with open(os.path.join("modules", filename), "r", encoding="utf-8") as f:
- contents += f.read()
-
-# Matching with regular expressions
-matches = re.findall(pattern, contents, re.DOTALL)
-
-# Convert to key/value pairs
-data = {match.strip('()"'): '' for match in matches}
-
-# Save as a JSON file
-with open('labels.json', 'w', encoding='utf-8') as f:
- json.dump(data, f, ensure_ascii=False, indent=4)
\ No newline at end of file
diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/meshutil.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/meshutil.py
deleted file mode 100644
index d56233476052b46019e90a3342e43e8f07760a7a..0000000000000000000000000000000000000000
--- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/meshutil.py
+++ /dev/null
@@ -1,258 +0,0 @@
-from enum import Enum
-
-import numpy as np
-import torch
-import trimesh
-
-from salad.utils import thutil
-
-
-def write_obj(name: str, vertices: np.ndarray, faces: np.ndarray):
- """
- name: filename
- vertices: (V,3)
- faces: (F,3) Assume the mesh is a triangle mesh.
- """
- vertices = thutil.th2np(vertices)
- faces = thutil.th2np(faces).astype(np.uint32)
- fout = open(name, "w")
- for ii in range(len(vertices)):
- fout.write(
- "v "
- + str(vertices[ii, 0])
- + " "
- + str(vertices[ii, 1])
- + " "
- + str(vertices[ii, 2])
- + "\n"
- )
- for ii in range(len(faces)):
- fout.write(
- "f "
- + str(faces[ii, 0] + 1)
- + " "
- + str(faces[ii, 1] + 1)
- + " "
- + str(faces[ii, 2] + 1)
- + "\n"
- )
- fout.close()
-
-
-def write_obj_triangle(name: str, vertices: np.ndarray, triangles: np.ndarray):
- fout = open(name, "w")
- for ii in range(len(vertices)):
- fout.write(
- "v "
- + str(vertices[ii, 0])
- + " "
- + str(vertices[ii, 1])
- + " "
- + str(vertices[ii, 2])
- + "\n"
- )
- for ii in range(len(triangles)):
- fout.write(
- "f "
- + str(triangles[ii, 0] + 1)
- + " "
- + str(triangles[ii, 1] + 1)
- + " "
- + str(triangles[ii, 2] + 1)
- + "\n"
- )
- fout.close()
-
-
-def write_obj_polygon(name: str, vertices: np.ndarray, polygons: np.ndarray):
- fout = open(name, "w")
- for ii in range(len(vertices)):
- fout.write(
- "v "
- + str(vertices[ii][0])
- + " "
- + str(vertices[ii][1])
- + " "
- + str(vertices[ii][2])
- + "\n"
- )
- for ii in range(len(polygons)):
- fout.write("f")
- for jj in range(len(polygons[ii])):
- fout.write(" " + str(polygons[ii][jj] + 1))
- fout.write("\n")
- fout.close()
-
-
-def read_obj(name: str):
- verts = []
- faces = []
- with open(name, "r") as f:
- lines = [line.rstrip() for line in f]
-
- for line in lines:
- if line.startswith("v "):
- verts.append(np.float32(line.split()[1:4]))
- elif line.startswith("f "):
- faces.append(
- np.int32([item.split("/")[0] for item in line.split()[1:4]])
- )
-
- v = np.vstack(verts)
- f = np.vstack(faces) - 1
- return v, f
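-
- # Illustrative round-trip sketch (not part of the original file): faces are written
- # 1-indexed to the .obj file and come back 0-indexed from read_obj, e.g.
- # v = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
- # f = np.array([[0, 1, 2]])
- # write_obj("tri.obj", v, f) # emits the line "f 1 2 3"
- # v2, f2 = read_obj("tri.obj") # f2 is [[0, 1, 2]] again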
-
-
-def scene_as_mesh(scene_or_mesh):
- if isinstance(scene_or_mesh, trimesh.Scene):
- if len(scene_or_mesh.geometry) == 0:
- mesh = None
- else:
- mesh = trimesh.util.concatenate(
- tuple(
- trimesh.Trimesh(vertices=g.vertices, faces=g.faces)
- for g in scene_or_mesh.geometry.values()
- if g.faces.shape[1] == 3
- )
- )
- else:
- mesh = scene_or_mesh
-
- return mesh
-
-
-def get_center(verts):
- max_vals = verts.max(0)
- min_vals = verts.min(0)
- center = (max_vals + min_vals) / 2
- return center
-
-
-def to_center(verts):
- verts -= get_center(verts)[None, :]
- return verts
-
-
-def get_offset_and_scale(verts, radius=1.0):
- verts = thutil.th2np(verts)
- verts = verts.copy()
-
- offset = get_center(verts)[None, :]
- verts -= offset
- scale = 1 / np.linalg.norm(verts, axis=1).max() * radius
-
- return offset, scale
-
-
-def normalize_mesh(mesh: trimesh.Trimesh):
- # unit cube normalization
- v, f = np.array(mesh.vertices), np.array(mesh.faces)
- maxv, minv = np.max(v, 0), np.min(v, 0)
- offset = minv
- v = v - offset
- scale = np.sqrt(np.sum((maxv - minv) ** 2))
- v = v / scale
- normed_mesh = trimesh.Trimesh(vertices=v, faces=f, process=False)
- return dict(mesh=normed_mesh, offset=offset, scale=scale)
-
-
-def normalize_scene(scene: trimesh.Scene):
- mesh_merged = scene_as_mesh(scene)
-
- out = normalize_mesh(mesh_merged)
- offset = out["offset"]
- scale = out["scale"]
-
- submesh_normalized_list = []
- for i, submesh in enumerate(list(scene.geometry.values())):
- v, f = np.array(submesh.vertices), np.array(submesh.faces)
- v = v - offset
- v = v / scale
- submesh_normalized_list.append(trimesh.Trimesh(v, f))
-
- return trimesh.Scene(submesh_normalized_list)
-
-
-class SampleBy(Enum):
- AREAS = 0
- FACES = 1
- HYB = 2
-
-
-def get_faces_normals(mesh):
- if type(mesh) is not torch.Tensor:
- vs, faces = mesh
- vs_faces = vs[faces]
- else:
- vs_faces = mesh
- if vs_faces.shape[-1] == 2:
- vs_faces = torch.cat(
- (
- vs_faces,
- torch.zeros(
- *vs_faces.shape[:2], 1, dtype=vs_faces.dtype, device=vs_faces.device
- ),
- ),
- dim=2,
- )
- face_normals = torch.cross(
- vs_faces[:, 1, :] - vs_faces[:, 0, :], vs_faces[:, 2, :] - vs_faces[:, 1, :]
- )
- return face_normals
-
-
-def compute_face_areas(mesh):
- face_normals = get_faces_normals(mesh)
- face_areas = torch.norm(face_normals, p=2, dim=1)
- face_areas_ = face_areas.clone()
- face_areas_[torch.eq(face_areas_, 0)] = 1
- face_normals = face_normals / face_areas_[:, None]
- face_areas = 0.5 * face_areas
- return face_areas, face_normals
-
-
-def sample_uvw(shape, device):
- u, v = torch.rand(*shape, device=device), torch.rand(*shape, device=device)
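- # reflect samples with u + v > 1 back into the lower triangle so that (u, v, w)
- # form uniformly distributed barycentric coordinates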
- mask = (u + v).gt(1)
- u[mask], v[mask] = -u[mask] + 1, -v[mask] + 1
- w = -u - v + 1
- uvw = torch.stack([u, v, w], dim=len(shape))
- return uvw
-
-
-def sample_on_mesh(mesh, num_samples: int, face_areas=None, sample_s=SampleBy.HYB):
- vs, faces = mesh
- if faces is None: # sample from pc
- uvw = None
- if vs.shape[0] < num_samples:
- chosen_faces_inds = torch.arange(vs.shape[0])
- else:
- chosen_faces_inds = torch.argsort(torch.rand(vs.shape[0]))[:num_samples]
- samples = vs[chosen_faces_inds]
- else:
- weighted_p = []
- if sample_s == SampleBy.AREAS or sample_s == SampleBy.HYB:
- if face_areas is None:
- face_areas, _ = compute_face_areas(mesh)
- face_areas[torch.isnan(face_areas)] = 0
- weighted_p.append(face_areas / face_areas.sum())
- if sample_s == SampleBy.FACES or sample_s == SampleBy.HYB:
- weighted_p.append(torch.ones(mesh[1].shape[0], device=mesh[0].device))
- chosen_faces_inds = [
- torch.multinomial(weights, num_samples // len(weighted_p), replacement=True)
- for weights in weighted_p
- ]
- if sample_s == SampleBy.HYB:
- chosen_faces_inds = torch.cat(chosen_faces_inds, dim=0)
- chosen_faces = faces[chosen_faces_inds]
- uvw = sample_uvw([num_samples], vs.device)
- samples = torch.einsum("sf,sfd->sd", uvw, vs[chosen_faces])
- return samples, chosen_faces_inds, uvw
-
-
-def repair_normals(v, f):
- mesh = trimesh.Trimesh(v, f)
- trimesh.repair.fix_normals(mesh)
- v = mesh.vertices
- f = np.asarray(mesh.faces)
- return v, f
diff --git a/spaces/KPCGD/bingo/src/components/chat.tsx b/spaces/KPCGD/bingo/src/components/chat.tsx
deleted file mode 100644
index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/components/chat.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-'use client'
-
-import { useCallback, useEffect, useMemo, useState } from 'react'
-import { useAtom } from 'jotai'
-import Image from 'next/image'
-import { cn } from '@/lib/utils'
-import { ChatList } from '@/components/chat-list'
-import { ChatPanel } from '@/components/chat-panel'
-import { WelcomeScreen } from '@/components/welcome-screen'
-import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
-import { ToneSelector } from './tone-selector'
-import { ChatHeader } from './chat-header'
-import { ChatSuggestions } from './chat-suggestions'
-import { bingConversationStyleAtom } from '@/state'
-import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
-import StopIcon from '@/assets/images/stop.svg'
-import { useBing } from '@/lib/hooks/use-bing'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { ChatNotification } from './chat-notification'
-import { Settings } from './settings'
-import { ChatHistory } from './chat-history'
-
-export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }
-
-export default function Chat({ className }: ChatProps) {
-
- const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
- const {
- messages,
- sendMessage,
- resetConversation,
- stopGenerating,
- setInput,
- bot,
- input,
- generating,
- isSpeaking,
- uploadImage,
- attachmentList,
- setAttachmentList,
- } = useBing()
-
- useEffect(() => {
- window.scrollTo({
- top: document.body.offsetHeight,
- behavior: 'smooth'
- })
- }, [])
-
- return (
-
-
-
-
-
-
- {messages.length ? (
- <>
-
-
-
-
-
- {generating ? (
-
-
-
- 停止响应
-
-
- ) : null}
- >
- ) : null}
-
-
-
-
- )
-}
diff --git a/spaces/Kevin676/AutoGPT/autogpt/memory/no_memory.py b/spaces/Kevin676/AutoGPT/autogpt/memory/no_memory.py
deleted file mode 100644
index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/memory/no_memory.py
+++ /dev/null
@@ -1,73 +0,0 @@
-"""A class that does not store any data. This is the default memory provider."""
-from __future__ import annotations
-
-from typing import Any
-
-from autogpt.memory.base import MemoryProviderSingleton
-
-
-class NoMemory(MemoryProviderSingleton):
- """
- A class that does not store any data. This is the default memory provider.
- """
-
- def __init__(self, cfg):
- """
- Initializes the NoMemory provider.
-
- Args:
- cfg: The config object.
-
- Returns: None
- """
- pass
-
- def add(self, data: str) -> str:
- """
- Adds a data point to the memory. No action is taken in NoMemory.
-
- Args:
- data: The data to add.
-
- Returns: An empty string.
- """
- return ""
-
- def get(self, data: str) -> list[Any] | None:
- """
- Gets the data from the memory that is most relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
-
- Returns: None
- """
- return None
-
- def clear(self) -> str:
- """
- Clears the memory. No action is taken in NoMemory.
-
- Returns: An empty string.
- """
- return ""
-
- def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
- """
- Returns all the data in the memory that is relevant to the given data.
- NoMemory always returns None.
-
- Args:
- data: The data to compare to.
- num_relevant: The number of relevant data to return.
-
- Returns: None
- """
- return None
-
- def get_stats(self):
- """
- Returns: An empty dictionary as there are no stats in NoMemory.
- """
- return {}
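-
- # Illustrative sketch (not part of the original file): the no-op contract documented
- # above, assuming a Config instance `cfg`.
- # memory = NoMemory(cfg)
- # memory.add("anything") # -> ""
- # memory.get("anything") # -> None
- # memory.get_relevant("query", 5) # -> None
- # memory.get_stats() # -> {}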
diff --git a/spaces/Kevin676/AutoGPT/tests/browse_tests.py b/spaces/Kevin676/AutoGPT/tests/browse_tests.py
deleted file mode 100644
index f896e7dd751b1b661d5e989909448b7e182eab69..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/tests/browse_tests.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import os
-import sys
-import unittest
-
-from bs4 import BeautifulSoup
-
-sys.path.append(os.path.abspath("../scripts"))
-
-from browse import extract_hyperlinks
-
-
-class TestBrowseLinks(unittest.TestCase):
- def test_extract_hyperlinks(self):
- body = """
-
- Google
- Foo
- Some other crap
-
- """
- soup = BeautifulSoup(body, "html.parser")
- links = extract_hyperlinks(soup, "http://example.com")
- self.assertEqual(
- links,
- [("Google", "https://google.com"), ("Foo", "http://example.com/foo.html")],
- )
diff --git a/spaces/KevinQHLin/UniVTG/model/univtg_ablation.py b/spaces/KevinQHLin/UniVTG/model/univtg_ablation.py
deleted file mode 100644
index a3fb06a293c0a7130715d53cecc9b98406d70fdf..0000000000000000000000000000000000000000
--- a/spaces/KevinQHLin/UniVTG/model/univtg_ablation.py
+++ /dev/null
@@ -1,474 +0,0 @@
-import pdb
-import torch
-import torch.nn.functional as F
-from torch import nn
-import numpy as np
-
-from model.transformer_encoder_droppath import build_transformer
-from model.matcher import build_matcher
-from model.position_encoding import build_position_encoding
-from utils.span_utils import generalized_temporal_iou, span_cxw_to_xx
-
-def init_weights(module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
-
-def mask_logits(inputs, mask, mask_value=-1e30):
- mask = mask.type(torch.float32)
- return inputs + (1.0 - mask) * mask_value
-
-def sim_matrix(a, b, eps=1e-8):
- """
- added eps for numerical stability
- """
- a_n, b_n = a.norm(dim=1)[:, None], b.norm(dim=1)[:, None]
- a_norm = a / torch.max(a_n, eps * torch.ones_like(a_n))
- b_norm = b / torch.max(b_n, eps * torch.ones_like(b_n))
- sim_mt = torch.mm(a_norm, b_norm.transpose(0, 1))
- return sim_mt
-
-class WeightedPool(nn.Module):
- def __init__(self, dim):
- super(WeightedPool, self).__init__()
- weight = torch.empty(dim, 1)
- nn.init.xavier_uniform_(weight)
- self.weight = nn.Parameter(weight, requires_grad=True)
-
- def forward(self, x, mask):
- alpha = torch.tensordot(x, self.weight, dims=1) # shape = (batch_size, seq_length, 1)
- alpha = mask_logits(alpha, mask=mask.unsqueeze(2))
- alphas = nn.Softmax(dim=1)(alpha)
- pooled_x = torch.matmul(x.transpose(1, 2), alphas) # (batch_size, dim, 1)
- pooled_x = pooled_x.squeeze(2)
- return pooled_x
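-
- # Illustrative shape sketch (not part of the original file):
- # pool = WeightedPool(dim=256)
- # x = torch.randn(2, 10, 256) # (batch, seq_len, dim)
- # mask = torch.ones(2, 10) # 1 for valid tokens, 0 for padding
- # pooled = pool(x, mask) # (2, 256): one pooled vector per sequence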
-
-class Model(nn.Module):
- """ This is the UniVTG module that performs moment localization. """
-
- def __init__(self, transformer, position_embed, txt_position_embed, txt_dim, vid_dim,
- input_dropout, aux_loss=False,
- max_v_l=75, span_loss_type="l1", use_txt_pos=False, n_input_proj=2):
- """ Initializes the model.
- Parameters:
- transformer: torch module of the transformer architecture. See transformer.py
- position_embed: torch module of the position_embedding, See position_encoding.py
- txt_position_embed: position_embedding for text
- txt_dim: int, text query input dimension
- vid_dim: int, video feature input dimension
- max_v_l: int, maximum #clips in videos
- span_loss_type: str, one of [l1, ce]
- l1: (center-x, width) regression.
- ce: (st_idx, ed_idx) classification.
- # foreground_thd: float, intersection over prediction >= foreground_thd: labeled as foreground
- # background_thd: float, intersection over prediction <= background_thd: labeled background
- """
- super().__init__()
- self.transformer = transformer
- self.position_embed = position_embed
- self.txt_position_embed = txt_position_embed
- hidden_dim = transformer.d_model
- self.span_loss_type = span_loss_type
- self.max_v_l = max_v_l
- span_pred_dim = 2 if span_loss_type == "l1" else max_v_l * 2
-
- self.token_type_embeddings = nn.Embedding(2, hidden_dim)
- self.token_type_embeddings.apply(init_weights)
-
- # Conv projector
- self.span_embed = Conv(hidden_dim, hidden_dim, span_pred_dim, 3, kernel_size=3)
- self.class_embed = Conv(hidden_dim, hidden_dim, 1, 3, kernel_size=3) # 0: background, 1: foreground
-
- self.use_txt_pos = use_txt_pos
- self.n_input_proj = n_input_proj
- relu_args = [True] * 3
- relu_args[n_input_proj-1] = False
- self.input_txt_proj = nn.Sequential(*[
- LinearLayer(txt_dim, hidden_dim, layer_norm=True, dropout=input_dropout, relu=relu_args[0]),
- LinearLayer(hidden_dim, hidden_dim, layer_norm=True, dropout=input_dropout, relu=relu_args[1]),
- LinearLayer(hidden_dim, hidden_dim, layer_norm=True, dropout=input_dropout, relu=relu_args[2])
- ][:n_input_proj])
- self.input_vid_proj = nn.Sequential(*[
- LinearLayer(vid_dim, hidden_dim, layer_norm=True, dropout=input_dropout, relu=relu_args[0]),
- LinearLayer(hidden_dim, hidden_dim, layer_norm=True, dropout=input_dropout, relu=relu_args[1]),
- LinearLayer(hidden_dim, hidden_dim, layer_norm=True, dropout=input_dropout, relu=relu_args[2])
- ][:n_input_proj])
-
- # MLP Projector
- self.weightedpool = WeightedPool(hidden_dim)
-
- def forward(self, src_txt, src_txt_mask, src_vid, src_vid_mask, src_cls=None, src_cls_mask=None):
- bs = src_vid.shape[0]
- src_vid = self.input_vid_proj(src_vid)
- src_txt = self.input_txt_proj(src_txt)
- if src_cls is not None:
- src_cls = self.input_txt_proj(src_cls)
-
- # type token.
- src_vid = src_vid + self.token_type_embeddings(torch.full_like(src_vid_mask.long(), 1))
- src_txt = src_txt + self.token_type_embeddings(torch.zeros_like(src_txt_mask.long()))
- if src_cls is not None:
- src_cls = src_cls + self.token_type_embeddings(torch.zeros_like(src_cls_mask.long()))
-
- src = torch.cat([src_vid, src_txt], dim=1) # (bsz, L_vid+L_txt, d)
- mask = torch.cat([src_vid_mask, src_txt_mask], dim=1).bool() # (bsz, L_vid+L_txt)
-
- pos_vid = self.position_embed(src_vid, src_vid_mask) # (bsz, L_vid, d)
- pos_txt = self.txt_position_embed(src_txt) if self.use_txt_pos else torch.zeros_like(src_txt) # (bsz, L_txt, d)
- pos = torch.cat([pos_vid, pos_txt], dim=1)
-
- memory = self.transformer(src, ~mask, pos)
- vid_mem = memory[:, :src_vid.shape[1], :] # (bsz, L_vid, d)
-
- outputs_class = self.class_embed(vid_mem).sigmoid() # (#layers, batch_size, #queries, #classes)
- outputs_coord = self.span_embed(vid_mem) # (#layers, bsz, #queries, 2 or max_v_l * 2)
-
- if self.span_loss_type == "l1":
- outputs_coord = outputs_coord.sigmoid()
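- # scaling the sigmoid outputs by (-1, 1) turns the two channels into a
- # non-positive left offset and a non-negative right offset; loss_spans adds
- # them to targets['timestamp'] to obtain the predicted span boundaries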
- idx_mask = torch.tensor((-1, 1)).unsqueeze(0).unsqueeze(0).cuda()
- idx_mask = idx_mask.repeat(outputs_coord.shape[0], outputs_coord.shape[1], 1)
- outputs_coord = outputs_coord * idx_mask
- else:
- raise NotImplementedError
-
- out = {'pred_logits': outputs_class, 'pred_spans': outputs_coord,
- 'src_vid_mask': src_vid_mask}
-
- vid_mem_proj = src_vid
-
- # word-level -> sentence-level
- txt_mem_proj = self.weightedpool(src_txt, src_txt_mask).unsqueeze(1)
- sim = F.cosine_similarity(vid_mem_proj, txt_mem_proj, dim=-1) + (src_vid_mask + 1e-45).log()
-
- out["vid_mem_proj"] = vid_mem_proj
- out["txt_mem_proj"] = txt_mem_proj
- if src_cls is not None:
- cls_mem_proj = self.weightedpool(src_cls, src_cls_mask)
- out["cls_mem_proj"] = cls_mem_proj
- out["saliency_scores"] = sim
- return out
-
-class SetCriterion(nn.Module):
- """ This class computes the loss for DETR.
- The process happens in two steps:
- 1) we compute hungarian assignment between ground truth boxes and the outputs of the model
- 2) we supervise each pair of matched ground-truth / prediction (supervise class and box)
- """
-
- def __init__(self, matcher, weight_dict, eos_coef, losses, temperature, span_loss_type, max_v_l,
- saliency_margin=1):
- """ Create the criterion.
- Parameters:
- matcher: module able to compute a matching between targets and proposals
- weight_dict: dict containing as key the names of the losses and as values their relative weight.
- eos_coef: relative classification weight applied to the no-object category
- losses: list of all the losses to be applied. See get_loss for list of available losses.
- temperature: float, temperature for NCE loss
- span_loss_type: str, [l1, ce]
- max_v_l: int,
- saliency_margin: float
- """
- super().__init__()
- self.matcher = matcher
- self.weight_dict = weight_dict
- self.losses = losses
- self.temperature = temperature
- self.span_loss_type = span_loss_type
- self.max_v_l = max_v_l
- self.saliency_margin = saliency_margin
- self.temperature = 0.07
-
- # foreground and background classification
- self.foreground_label = 0
- self.background_label = 1
- self.eos_coef = eos_coef
- empty_weight = torch.ones(2)
- empty_weight[-1] = self.eos_coef # lower weight for background (index 1, foreground index 0)
- self.register_buffer('empty_weight', empty_weight)
-
- def loss_spans(self, outputs, targets, indices):
- assert 'pred_spans' in outputs
-
- start_spans = targets['timestamp']
- pred_spans = outputs['pred_spans']
- src_spans = start_spans + pred_spans
- gt_spans = targets['span_labels_nn']
-
- mask = targets['timestamp_mask'].bool()
- mask_full = targets['timestamp_mask'].unsqueeze(2).repeat(1, 1, 2)
- mask_valid = targets['timestamp_window'].bool()
- mask_valid_full = targets['timestamp_window'].unsqueeze(2).repeat(1, 1, 2)
-
- weight_abalation_b = targets['weight_ablation'][:,0].unsqueeze(-1)
- if weight_abalation_b.sum() == 0:
- return {"loss_f": torch.tensor(0).cuda(), "loss_g": torch.tensor(0).cuda()}
-
- mask_valid = (mask_valid * weight_abalation_b).bool()
- mask_valid_full = (mask_valid_full * weight_abalation_b.unsqueeze(-1)).bool()
-
- loss_span = F.smooth_l1_loss(src_spans, gt_spans, reduction='none') * mask_valid_full
- loss_giou = 1 - torch.diag(generalized_temporal_iou(src_spans[mask_valid], gt_spans[mask_valid]))
-
- losses = {}
- losses['loss_b'] = loss_span.sum() / mask_valid.sum()
- losses['loss_g'] = loss_giou.mean()
- return losses
-
- def loss_labels(self, outputs, targets, indices, log=True):
- src_logits = outputs['pred_logits'].squeeze(-1) # (batch_size, #queries, #classes=2)
- mask = targets['timestamp_mask'].bool()
- mask_valid = targets['timestamp_window'].bool()
- target_classes = torch.full(src_logits.shape[:2], 0, dtype=torch.int64, device=src_logits.device) # (batch_size, #queries)
- target_classes[mask_valid] = 1
- # target_classes = targets['timestamp_window'] # soft cls.
- target_classes.float()
- # pdb.set_trace()
-
- weights = torch.zeros_like(target_classes).float()
- weights[mask] = self.empty_weight[1]
- weights[mask_valid] = self.empty_weight[0]
-
- loss_ce = F.binary_cross_entropy(src_logits, target_classes.float(), weight=weights, reduction="none") * mask
-
- weight_abalation_f = targets['weight_ablation'][:,2].unsqueeze(-1)
- if weight_abalation_f.sum() == 0:
- return {"loss_f": torch.tensor(0).cuda()}
-
- mask = mask * weight_abalation_f
- loss_ce = loss_ce * weight_abalation_f
- return {"loss_f": loss_ce.sum() / mask.sum()}
- # return {"loss_f": loss_ce.sum() / (1 + mask_valid.sum())}
-
- def loss_saliency(self, outputs, targets, indices, log=True):
- """higher scores for positive clips"""
- if "saliency_pos_labels" not in targets:
- return {"loss_s_inter": 0., "loss_s_intra": 0.}
- saliency_scores = targets["saliency_scores"]
- if saliency_scores.sum() == 0:
- return {"loss_s_inter": 0., "loss_s_intra": 0.}
-
- # * inter-vid mode
- vid_mem_proj = outputs["vid_mem_proj"]
- pos_indices = targets["saliency_pos_labels"][:,0].long() # (N, #pairs)
- batch_indices = torch.arange(len(vid_mem_proj)).to(vid_mem_proj.device)
-
- vid_feats = vid_mem_proj[batch_indices, pos_indices]
- txt_feats = outputs["txt_mem_proj"].squeeze(1)
- sim = sim_matrix(vid_feats, txt_feats)
-
- i_logsm = F.log_softmax(sim / self.temperature, dim=1)
- j_logsm = F.log_softmax(sim.t() /self.temperature, dim=1)
-
- # sum over positives
- idiag = torch.diag(i_logsm)
- jdiag = torch.diag(j_logsm)
-
- weight_abalation_s = targets['weight_ablation'][:,3].bool()
- if weight_abalation_s.sum() == 0:
- return {"loss_s_inter": torch.tensor(0).cuda(),
- "loss_s_intra": torch.tensor(0).cuda()}
-
- _idiag = idiag[weight_abalation_s]
- _jdiag = jdiag[weight_abalation_s]
-
- loss_i = _idiag.sum() / len(_idiag)
- loss_j = _jdiag.sum() / len(_jdiag)
-
- loss_saliency_inter = - loss_i - loss_j
-
- # * intra-vid mode
- mask = targets['timestamp_mask']
- selected_scores = saliency_scores[batch_indices, pos_indices].unsqueeze(-1)
- neg_indices_in = (saliency_scores < selected_scores)
- neg_indices_in[batch_indices, pos_indices] = True
- mask_invalid = neg_indices_in * mask.bool()
-
- sim_in = F.cosine_similarity(vid_mem_proj, txt_feats.unsqueeze(1), dim=-1)
- sim_in = sim_in + (mask_invalid + 1e-45).log()
- logsm_in_i = F.log_softmax(sim_in / self.temperature, dim=1)
- logsm_in_j = F.log_softmax(sim_in.t() / self.temperature, dim=1)
-
- pos_logsm_in_i = logsm_in_i[batch_indices, pos_indices]
- pos_logsm_in_j = logsm_in_j[pos_indices, batch_indices]
- _pos_logsm_in_i = pos_logsm_in_i[weight_abalation_s]
- _pos_logsm_in_j = pos_logsm_in_j[weight_abalation_s]
-
- loss_in_i = _pos_logsm_in_i.sum() / len(_pos_logsm_in_i)
- loss_in_j = _pos_logsm_in_j.sum() / len(_pos_logsm_in_j)
-
- loss_saliency_intra = - loss_in_i - loss_in_j
-
- return {"loss_s_inter": loss_saliency_inter, "loss_s_intra": loss_saliency_intra}
-
- def loss_saliency_cls(self, outputs, targets, indices, log=True):
- """higher scores for positive clips"""
- if "saliency_pos_labels" not in targets:
- return {"loss_s_inter": 0., "loss_s_intra": 0.}
- saliency_scores = targets["saliency_scores"]
- if saliency_scores.sum() == 0:
- return {"loss_s_inter": 0., "loss_s_intra": 0.}
-
- # * inter-vid mode
- vid_mem_proj = outputs["vid_mem_proj"]
- pos_indices = targets["saliency_pos_labels"][:,0].long() # (N, #pairs)
- batch_indices = torch.arange(len(vid_mem_proj)).to(vid_mem_proj.device)
-
- vid_feats = vid_mem_proj[batch_indices, pos_indices]
- txt_feats = outputs["txt_mem_proj"].squeeze(1)
- sim = sim_matrix(vid_feats, txt_feats)
-
- i_logsm = F.log_softmax(sim / self.temperature, dim=1)
- j_logsm = F.log_softmax(sim.t() /self.temperature, dim=1)
-
- # sum over positives
- idiag = torch.diag(i_logsm)
- jdiag = torch.diag(j_logsm)
- loss_i = idiag.sum() / len(idiag)
- loss_j = jdiag.sum() / len(jdiag)
-
- loss_saliency_inter = - loss_i - loss_j
-
- # * intra-vid mode
- if 'cls_idx' not in targets.keys(): # eval
- return {"loss_s_inter": loss_saliency_inter}
-
- cls_indices = targets['cls_idx'].bool()
- cls_feats = outputs["cls_mem_proj"].squeeze(1)
- sim_cls = sim_matrix(vid_feats, cls_feats)
-
- i_logsm_cls = F.log_softmax(sim_cls / self.temperature, dim=1)
- idiag_cls = i_logsm_cls[cls_indices]
- loss_cls_i = idiag_cls.sum() / len(idiag_cls)
-
- loss_saliency_intra = - loss_cls_i
-
- return {"loss_s_inter": loss_saliency_inter, "loss_s_intra": loss_saliency_intra}
-
- def get_loss(self, loss, outputs, targets, indices, **kwargs):
- loss_map = {
- "spans": self.loss_spans,
- "labels": self.loss_labels,
- "saliency": self.loss_saliency,
- "saliency_cls": self.loss_saliency_cls,
- }
- assert loss in loss_map, f'do you really want to compute {loss} loss?'
- return loss_map[loss](outputs, targets, indices, **kwargs)
-
- def forward(self, outputs, targets, hl_only=False):
- """ This performs the loss computation.
- Parameters:
- outputs: dict of tensors, see the output specification of the model for the format
- targets: list of dicts, such that len(targets) == batch_size.
- The expected keys in each dict depends on the losses applied, see each loss' doc
- """
- indices = None
- # Compute all the requested losses
- losses = {}
- for loss in self.losses:
- losses.update(self.get_loss(loss, outputs, targets, indices))
-
- return losses
-
-class MLP(nn.Module):
- """ Very simple multi-layer perceptron (also called FFN)"""
-
- def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x
-
-class Conv(nn.Module):
- """ Very simple multi-layer perceptron (also called FFN)"""
-
- def __init__(self, input_dim, hidden_dim, output_dim, num_layers, kernel_size):
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- # self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
- self.layers = nn.ModuleList(
- nn.Conv1d(n, k, kernel_size=kernel_size, stride=1, padding=kernel_size//2, dilation=1, groups=1, bias=True, padding_mode='zeros')
- for n, k in zip([input_dim] + h, h + [output_dim]))
- def forward(self, x):
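- # x: (batch, length, dim); Conv1d expects (batch, channels, length), hence the permutes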
- x = x.permute(0,2,1)
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x.permute(0, 2, 1)
-
-class LinearLayer(nn.Module):
- """linear layer configurable with layer normalization, dropout, ReLU."""
-
- def __init__(self, in_hsz, out_hsz, layer_norm=True, dropout=0.1, relu=True):
- super(LinearLayer, self).__init__()
- self.relu = relu
- self.layer_norm = layer_norm
- if layer_norm:
- self.LayerNorm = nn.LayerNorm(in_hsz)
- layers = [
- nn.Dropout(dropout),
- nn.Linear(in_hsz, out_hsz)
- ]
- self.net = nn.Sequential(*layers)
-
- def forward(self, x):
- """(N, L, D)"""
- if self.layer_norm:
- x = self.LayerNorm(x)
- x = self.net(x)
- if self.relu:
- x = F.relu(x, inplace=True)
- return x # (N, L, D)
-
-
-def build_model(args):
- device = torch.device(args.device)
-
- transformer = build_transformer(args)
- position_embedding, txt_position_embedding = build_position_encoding(args)
-
- model = Model(
- transformer,
- position_embedding,
- txt_position_embedding,
- txt_dim=args.t_feat_dim,
- vid_dim=args.v_feat_dim,
- input_dropout=args.input_dropout,
- span_loss_type=args.span_loss_type,
- use_txt_pos=args.use_txt_pos,
- n_input_proj=args.n_input_proj,
- )
-
- matcher = build_matcher(args)
- weight_dict = {"loss_b": args.b_loss_coef,
- "loss_g": args.g_loss_coef,
- "loss_f": args.f_loss_coef,
- "loss_s_intra": args.s_loss_intra_coef,
- "loss_s_inter": args.s_loss_inter_coef}
-
- if args.dset_type in ['mr', 'vlp']:
- if 'tal' not in args.train_path:
- losses = ['spans', 'labels', 'saliency']
- else:
- losses = ['spans', 'labels', 'saliency_cls']
- elif args.dset_type in ['hl', 'vs']:
- losses = ['labels', 'saliency']
-
- criterion = SetCriterion(
- matcher=matcher,
- weight_dict=weight_dict, losses=losses,
- eos_coef=args.eos_coef, temperature=args.temperature,
- span_loss_type=args.span_loss_type, max_v_l=args.max_v_l,
- saliency_margin=args.saliency_margin,
- )
- criterion.to(device)
- return model, criterion
\ No newline at end of file
diff --git a/spaces/KyanChen/RSPrompter/mmdet/datasets/coco_panoptic.py b/spaces/KyanChen/RSPrompter/mmdet/datasets/coco_panoptic.py
deleted file mode 100644
index 33d4189e6c4a86648d8802f06f660139ebef4878..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/datasets/coco_panoptic.py
+++ /dev/null
@@ -1,287 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os.path as osp
-from typing import Callable, List, Optional, Sequence, Union
-
-from mmdet.registry import DATASETS
-from .api_wrappers import COCOPanoptic
-from .coco import CocoDataset
-
-
-@DATASETS.register_module()
-class CocoPanopticDataset(CocoDataset):
- """Coco dataset for Panoptic segmentation.
-
- The annotation format is shown as follows. The `ann` field is optional
- for testing.
-
- .. code-block:: none
-
- [
- {
- 'filename': f'{image_id:012}.png',
- 'image_id':9
- 'segments_info':
- [
- {
- 'id': 8345037, (segment_id in panoptic png,
- convert from rgb)
- 'category_id': 51,
- 'iscrowd': 0,
- 'bbox': (x1, y1, w, h),
- 'area': 24315
- },
- ...
- ]
- },
- ...
- ]
-
- Args:
- ann_file (str): Annotation file path. Defaults to ''.
- metainfo (dict, optional): Meta information for dataset, such as class
- information. Defaults to None.
- data_root (str, optional): The root directory for ``data_prefix`` and
- ``ann_file``. Defaults to None.
- data_prefix (dict, optional): Prefix for training data. Defaults to
- ``dict(img=None, ann=None, seg=None)``. The prefix ``seg``, which is
- for the panoptic segmentation map, must not be None.
- filter_cfg (dict, optional): Config for filter data. Defaults to None.
- indices (int or Sequence[int], optional): Support using first few
- data in annotation file to facilitate training/testing on a smaller
- dataset. Defaults to None which means using all ``data_infos``.
- serialize_data (bool, optional): Whether to hold memory using
- serialized objects, when enabled, data loader workers can use
- shared RAM from master process instead of making a copy. Defaults
- to True.
- pipeline (list, optional): Processing pipeline. Defaults to [].
- test_mode (bool, optional): ``test_mode=True`` means in test phase.
- Defaults to False.
- lazy_init (bool, optional): Whether to load annotation during
- instantiation. In some cases, such as visualization, only the meta
- information of the dataset is needed, and it is not necessary to
- load the annotation file. ``Basedataset`` can skip loading annotations to
- save time by setting ``lazy_init=False``. Defaults to False.
- max_refetch (int, optional): If ``Basedataset.prepare_data`` get a
- None img. The maximum extra number of cycles to get a valid
- image. Defaults to 1000.
- """
-
- METAINFO = {
- 'classes':
- ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train',
- 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign',
- 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep',
- 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella',
- 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
- 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard',
- 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork',
- 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange',
- 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair',
- 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv',
- 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
- 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase',
- 'scissors', 'teddy bear', 'hair drier', 'toothbrush', 'banner',
- 'blanket', 'bridge', 'cardboard', 'counter', 'curtain', 'door-stuff',
- 'floor-wood', 'flower', 'fruit', 'gravel', 'house', 'light',
- 'mirror-stuff', 'net', 'pillow', 'platform', 'playingfield',
- 'railroad', 'river', 'road', 'roof', 'sand', 'sea', 'shelf', 'snow',
- 'stairs', 'tent', 'towel', 'wall-brick', 'wall-stone', 'wall-tile',
- 'wall-wood', 'water-other', 'window-blind', 'window-other',
- 'tree-merged', 'fence-merged', 'ceiling-merged', 'sky-other-merged',
- 'cabinet-merged', 'table-merged', 'floor-other-merged',
- 'pavement-merged', 'mountain-merged', 'grass-merged', 'dirt-merged',
- 'paper-merged', 'food-other-merged', 'building-other-merged',
- 'rock-merged', 'wall-other-merged', 'rug-merged'),
- 'thing_classes':
- ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train',
- 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign',
- 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep',
- 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella',
- 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
- 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard',
- 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork',
- 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange',
- 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair',
- 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv',
- 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
- 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase',
- 'scissors', 'teddy bear', 'hair drier', 'toothbrush'),
- 'stuff_classes':
- ('banner', 'blanket', 'bridge', 'cardboard', 'counter', 'curtain',
- 'door-stuff', 'floor-wood', 'flower', 'fruit', 'gravel', 'house',
- 'light', 'mirror-stuff', 'net', 'pillow', 'platform', 'playingfield',
- 'railroad', 'river', 'road', 'roof', 'sand', 'sea', 'shelf', 'snow',
- 'stairs', 'tent', 'towel', 'wall-brick', 'wall-stone', 'wall-tile',
- 'wall-wood', 'water-other', 'window-blind', 'window-other',
- 'tree-merged', 'fence-merged', 'ceiling-merged', 'sky-other-merged',
- 'cabinet-merged', 'table-merged', 'floor-other-merged',
- 'pavement-merged', 'mountain-merged', 'grass-merged', 'dirt-merged',
- 'paper-merged', 'food-other-merged', 'building-other-merged',
- 'rock-merged', 'wall-other-merged', 'rug-merged'),
- 'palette':
- [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), (106, 0, 228),
- (0, 60, 100), (0, 80, 100), (0, 0, 70), (0, 0, 192), (250, 170, 30),
- (100, 170, 30), (220, 220, 0), (175, 116, 175), (250, 0, 30),
- (165, 42, 42), (255, 77, 255), (0, 226, 252), (182, 182, 255),
- (0, 82, 0), (120, 166, 157), (110, 76, 0), (174, 57, 255),
- (199, 100, 0), (72, 0, 118), (255, 179, 240), (0, 125, 92),
- (209, 0, 151), (188, 208, 182), (0, 220, 176), (255, 99, 164),
- (92, 0, 73), (133, 129, 255), (78, 180, 255), (0, 228, 0),
- (174, 255, 243), (45, 89, 255), (134, 134, 103), (145, 148, 174),
- (255, 208, 186), (197, 226, 255), (171, 134, 1), (109, 63, 54),
- (207, 138, 255), (151, 0, 95), (9, 80, 61), (84, 105, 51),
- (74, 65, 105), (166, 196, 102), (208, 195, 210), (255, 109, 65),
- (0, 143, 149), (179, 0, 194), (209, 99, 106), (5, 121, 0),
- (227, 255, 205), (147, 186, 208), (153, 69, 1), (3, 95, 161),
- (163, 255, 0), (119, 0, 170), (0, 182, 199), (0, 165, 120),
- (183, 130, 88), (95, 32, 0), (130, 114, 135), (110, 129, 133),
- (166, 74, 118), (219, 142, 185), (79, 210, 114), (178, 90, 62),
- (65, 70, 15), (127, 167, 115), (59, 105, 106), (142, 108, 45),
- (196, 172, 0), (95, 54, 80), (128, 76, 255), (201, 57, 1),
- (246, 0, 122), (191, 162, 208), (255, 255, 128), (147, 211, 203),
- (150, 100, 100), (168, 171, 172), (146, 112, 198), (210, 170, 100),
- (92, 136, 89), (218, 88, 184), (241, 129, 0), (217, 17, 255),
- (124, 74, 181), (70, 70, 70), (255, 228, 255), (154, 208, 0),
- (193, 0, 92), (76, 91, 113), (255, 180, 195), (106, 154, 176),
- (230, 150, 140), (60, 143, 255), (128, 64, 128), (92, 82, 55),
- (254, 212, 124), (73, 77, 174), (255, 160, 98), (255, 255, 255),
- (104, 84, 109), (169, 164, 131), (225, 199, 255), (137, 54, 74),
- (135, 158, 223), (7, 246, 231), (107, 255, 200), (58, 41, 149),
- (183, 121, 142), (255, 73, 97), (107, 142, 35), (190, 153, 153),
- (146, 139, 141), (70, 130, 180), (134, 199, 156), (209, 226, 140),
- (96, 36, 108), (96, 96, 96), (64, 170, 64), (152, 251, 152),
- (208, 229, 228), (206, 186, 171), (152, 161, 64), (116, 112, 0),
- (0, 114, 143), (102, 102, 156), (250, 141, 255)]
- }
- COCOAPI = COCOPanoptic
- # ann_id is not unique in coco panoptic dataset.
- ANN_ID_UNIQUE = False
-
- def __init__(self,
- ann_file: str = '',
- metainfo: Optional[dict] = None,
- data_root: Optional[str] = None,
- data_prefix: dict = dict(img=None, ann=None, seg=None),
- filter_cfg: Optional[dict] = None,
- indices: Optional[Union[int, Sequence[int]]] = None,
- serialize_data: bool = True,
- pipeline: List[Union[dict, Callable]] = [],
- test_mode: bool = False,
- lazy_init: bool = False,
- max_refetch: int = 1000,
- backend_args: dict = None,
- **kwargs) -> None:
- super().__init__(
- ann_file=ann_file,
- metainfo=metainfo,
- data_root=data_root,
- data_prefix=data_prefix,
- filter_cfg=filter_cfg,
- indices=indices,
- serialize_data=serialize_data,
- pipeline=pipeline,
- test_mode=test_mode,
- lazy_init=lazy_init,
- max_refetch=max_refetch,
- backend_args=backend_args,
- **kwargs)
-
- def parse_data_info(self, raw_data_info: dict) -> dict:
- """Parse raw annotation to target format.
-
- Args:
- raw_data_info (dict): Raw data information load from ``ann_file``.
-
- Returns:
- dict: Parsed annotation.
- """
- img_info = raw_data_info['raw_img_info']
- ann_info = raw_data_info['raw_ann_info']
- # filter out unmatched annotations which have
- # same segment_id but belong to other image
- ann_info = [
- ann for ann in ann_info if ann['image_id'] == img_info['img_id']
- ]
- data_info = {}
-
- img_path = osp.join(self.data_prefix['img'], img_info['file_name'])
- if self.data_prefix.get('seg', None):
- seg_map_path = osp.join(
- self.data_prefix['seg'],
- img_info['file_name'].replace('jpg', 'png'))
- else:
- seg_map_path = None
- data_info['img_path'] = img_path
- data_info['img_id'] = img_info['img_id']
- data_info['seg_map_path'] = seg_map_path
- data_info['height'] = img_info['height']
- data_info['width'] = img_info['width']
-
- instances = []
- segments_info = []
- for ann in ann_info:
- instance = {}
- x1, y1, w, h = ann['bbox']
- if ann['area'] <= 0 or w < 1 or h < 1:
- continue
- bbox = [x1, y1, x1 + w, y1 + h]
- category_id = ann['category_id']
- contiguous_cat_id = self.cat2label[category_id]
-
- is_thing = self.coco.load_cats(ids=category_id)[0]['isthing']
- if is_thing:
- is_crowd = ann.get('iscrowd', False)
- instance['bbox'] = bbox
- instance['bbox_label'] = contiguous_cat_id
- if not is_crowd:
- instance['ignore_flag'] = 0
- else:
- instance['ignore_flag'] = 1
- is_thing = False
-
- segment_info = {
- 'id': ann['id'],
- 'category': contiguous_cat_id,
- 'is_thing': is_thing
- }
- segments_info.append(segment_info)
- if len(instance) > 0 and is_thing:
- instances.append(instance)
- data_info['instances'] = instances
- data_info['segments_info'] = segments_info
- return data_info
-
- def filter_data(self) -> List[dict]:
- """Filter images too small or without ground truth.
-
- Returns:
- List[dict]: ``self.data_list`` after filtering.
- """
- if self.test_mode:
- return self.data_list
-
- if self.filter_cfg is None:
- return self.data_list
-
- filter_empty_gt = self.filter_cfg.get('filter_empty_gt', False)
- min_size = self.filter_cfg.get('min_size', 0)
-
- ids_with_ann = set()
- # check whether images have legal thing annotations.
- for data_info in self.data_list:
- for segment_info in data_info['segments_info']:
- if not segment_info['is_thing']:
- continue
- ids_with_ann.add(data_info['img_id'])
-
- valid_data_list = []
- for data_info in self.data_list:
- img_id = data_info['img_id']
- width = data_info['width']
- height = data_info['height']
- if filter_empty_gt and img_id not in ids_with_ann:
- continue
- if min(width, height) >= min_size:
- valid_data_list.append(data_info)
-
- return valid_data_list
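-
- # Illustrative sketch (not part of the original file): a typical instantiation; the
- # concrete paths are assumptions about a standard COCO panoptic layout.
- # dataset = CocoPanopticDataset(
- # data_root='data/coco/',
- # ann_file='annotations/panoptic_train2017.json',
- # data_prefix=dict(img='train2017/', seg='annotations/panoptic_train2017/'),
- # filter_cfg=dict(filter_empty_gt=True, min_size=32))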
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/losses/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/losses/__init__.py
deleted file mode 100644
index f008f8a7f660e630d11b5cc4084936e5d809c3fb..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/losses/__init__.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .accuracy import Accuracy, accuracy
-from .ae_loss import AssociativeEmbeddingLoss
-from .balanced_l1_loss import BalancedL1Loss, balanced_l1_loss
-from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy,
- cross_entropy, mask_cross_entropy)
-from .dice_loss import DiceLoss
-from .focal_loss import FocalLoss, sigmoid_focal_loss
-from .gaussian_focal_loss import GaussianFocalLoss
-from .gfocal_loss import DistributionFocalLoss, QualityFocalLoss
-from .ghm_loss import GHMC, GHMR
-from .iou_loss import (BoundedIoULoss, CIoULoss, DIoULoss, EIoULoss, GIoULoss,
- IoULoss, bounded_iou_loss, iou_loss)
-from .kd_loss import KnowledgeDistillationKLDivLoss
-from .mse_loss import MSELoss, mse_loss
-from .pisa_loss import carl_loss, isr_p
-from .seesaw_loss import SeesawLoss
-from .smooth_l1_loss import L1Loss, SmoothL1Loss, l1_loss, smooth_l1_loss
-from .utils import reduce_loss, weight_reduce_loss, weighted_loss
-from .varifocal_loss import VarifocalLoss
-
-__all__ = [
- 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy',
- 'mask_cross_entropy', 'CrossEntropyLoss', 'sigmoid_focal_loss',
- 'FocalLoss', 'smooth_l1_loss', 'SmoothL1Loss', 'balanced_l1_loss',
- 'BalancedL1Loss', 'mse_loss', 'MSELoss', 'iou_loss', 'bounded_iou_loss',
- 'IoULoss', 'BoundedIoULoss', 'GIoULoss', 'DIoULoss', 'CIoULoss',
- 'EIoULoss', 'GHMC', 'GHMR', 'reduce_loss', 'weight_reduce_loss',
- 'weighted_loss', 'L1Loss', 'l1_loss', 'isr_p', 'carl_loss',
- 'AssociativeEmbeddingLoss', 'GaussianFocalLoss', 'QualityFocalLoss',
- 'DistributionFocalLoss', 'VarifocalLoss', 'KnowledgeDistillationKLDivLoss',
- 'SeesawLoss', 'DiceLoss'
-]
diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dtd.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dtd.py
deleted file mode 100644
index 034d0b1b444afebfc420eeff7e138072f7d7ee1f..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/dtd.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import List
-
-import mat4py
-from mmengine import get_file_backend
-
-from mmpretrain.registry import DATASETS
-from .base_dataset import BaseDataset
-from .categories import DTD_CATEGORIES
-
-
-@DATASETS.register_module()
-class DTD(BaseDataset):
- """The Describable Texture Dataset (DTD).
-
-    Support the Describable Texture Dataset (DTD).
- After downloading and decompression, the dataset directory structure is as follows.
-
- DTD dataset directory: ::
-
- dtd
- ├── images
- │ ├── banded
- | | ├──banded_0002.jpg
- | | ├──banded_0004.jpg
- | | └── ...
- │ └── ...
- ├── imdb
- │ └── imdb.mat
- ├── labels
-        │   ├── labels_joint_anno.txt
-        │   ├── test1.txt
-        │   ├── test2.txt
-        │   └── ...
- └── ....
-
- Args:
- data_root (str): The root directory for Describable Texture dataset.
- split (str, optional): The dataset split, supports "train",
- "val", "trainval", and "test". Default to "trainval".
-
- Examples:
- >>> from mmpretrain.datasets import DTD
- >>> train_dataset = DTD(data_root='data/dtd', split='trainval')
- >>> train_dataset
- Dataset DTD
- Number of samples: 3760
- Number of categories: 47
- Root of dataset: data/dtd
- >>> test_dataset = DTD(data_root='data/dtd', split='test')
- >>> test_dataset
- Dataset DTD
- Number of samples: 1880
- Number of categories: 47
- Root of dataset: data/dtd
- """ # noqa: E501
-
- METAINFO = {'classes': DTD_CATEGORIES}
-
- def __init__(self, data_root: str, split: str = 'trainval', **kwargs):
-
- splits = ['train', 'val', 'trainval', 'test']
- assert split in splits, \
-            f"The split must be one of {splits}, but got '{split}'"
- self.split = split
-
- data_prefix = 'images'
- test_mode = split == 'test'
-
- self.backend = get_file_backend(data_root, enable_singleton=True)
- ann_file = self.backend.join_path('imdb', 'imdb.mat')
-
- super(DTD, self).__init__(
- ann_file=ann_file,
- data_root=data_root,
- data_prefix=data_prefix,
- test_mode=test_mode,
- **kwargs)
-
- def load_data_list(self):
- """Load images and ground truth labels."""
-
- data = mat4py.loadmat(self.ann_file)['images']
- names = data['name']
- labels = data['class']
- parts = data['set']
- num = len(names)
-        assert num == len(labels) == len(parts), 'annotation file fields have mismatched lengths'
-
- if self.split == 'train':
- target_set = {1}
- elif self.split == 'val':
- target_set = {2}
- elif self.split == 'test':
- target_set = {3}
- else:
- target_set = {1, 2}
-
- data_list = []
- for i in range(num):
- if parts[i] in target_set:
- img_name = names[i]
- img_path = self.backend.join_path(self.img_prefix, img_name)
- gt_label = labels[i] - 1
- info = dict(img_path=img_path, gt_label=gt_label)
- data_list.append(info)
-
- return data_list
-
- def extra_repr(self) -> List[str]:
- """The extra repr information of the dataset."""
- body = [
- f'Root of dataset: \t{self.data_root}',
- ]
- return body
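
As a side note on the `imdb.mat` layout that `load_data_list` above relies on, here is a hedged standalone sketch of reading the split codes with `mat4py` (1 = train, 2 = val, 3 = test, and the default trainval split takes {1, 2}); the `data/dtd` path is an assumed local layout.

```python
from collections import Counter

import mat4py

# Assumed local path; mirrors the ann_file built in __init__ above.
data = mat4py.loadmat('data/dtd/imdb/imdb.mat')['images']

# 'set' holds a split code per image: 1 = train, 2 = val, 3 = test.
split_counts = Counter(data['set'])
print('train:', split_counts[1], 'val:', split_counts[2], 'test:', split_counts[3])

# Class ids in imdb.mat are 1-based; the dataset subtracts 1 to get gt_label.
print('first image:', data['name'][0], 'label:', data['class'][0] - 1)
```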
diff --git a/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_lsmdc_retrieval.py b/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_lsmdc_retrieval.py
deleted file mode 100644
index 302569eecdf56a57e428366e90a062989283e9a1..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/vl_ret/dataloader_lsmdc_retrieval.py
+++ /dev/null
@@ -1,208 +0,0 @@
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import unicode_literals
-from __future__ import print_function
-
-import os
-from torch.utils.data import Dataset
-import numpy as np
-import json
-import math
-from .rawvideo_util import RawVideoExtractor
-
-class LSMDC_DataLoader(Dataset):
- """LSMDC dataset loader."""
- def __init__(
- self,
- subset,
- data_path,
- features_path,
- tokenizer,
- max_words=30,
- feature_framerate=1.0,
- max_frames=100,
- image_resolution=224,
- frame_order=0,
- slice_framepos=0,
- ):
- self.data_path = data_path
- self.features_path = features_path
- self.feature_framerate = feature_framerate
- self.max_words = max_words
- self.max_frames = max_frames
- self.tokenizer = tokenizer
- # 0: ordinary order; 1: reverse order; 2: random order.
- self.frame_order = frame_order
- assert self.frame_order in [0, 1, 2]
- # 0: cut from head frames; 1: cut from tail frames; 2: extract frames uniformly.
- self.slice_framepos = slice_framepos
- assert self.slice_framepos in [0, 1, 2]
-
- self.subset = subset
- assert self.subset in ["train", "val", "test"]
-
- video_json_path_dict = {}
- video_json_path_dict["train"] = os.path.join(self.data_path, "LSMDC16_annos_training.csv")
- video_json_path_dict["val"] = os.path.join(self.data_path, "LSMDC16_annos_val.csv")
- video_json_path_dict["test"] = os.path.join(self.data_path, "LSMDC16_challenge_1000_publictect.csv")
-
-        # Each line is tab-separated: clip_id, start_aligned, end_aligned, start_extracted, end_extracted, sentence.
-        # clip_id is not a unique identifier, i.e. the same clip_id can be associated with multiple sentences.
-        # However, LSMDC16_challenge_1000_publictect.csv has no repeat instances
- video_id_list = []
- caption_dict = {}
- with open(video_json_path_dict[self.subset], 'r') as fp:
- for line in fp:
- line = line.strip()
- line_split = line.split("\t")
- assert len(line_split) == 6
- clip_id, start_aligned, end_aligned, start_extracted, end_extracted, sentence = line_split
- caption_dict[len(caption_dict)] = (clip_id, sentence)
- if clip_id not in video_id_list: video_id_list.append(clip_id)
-
- video_dict = {}
- for root, dub_dir, video_files in os.walk(self.features_path):
- for video_file in video_files:
- video_id_ = ".".join(video_file.split(".")[:-1])
- if video_id_ not in video_id_list:
- continue
- file_path_ = os.path.join(root, video_file)
- video_dict[video_id_] = file_path_
-
- self.video_dict = video_dict
-
- # Get all captions
- self.iter2video_pairs_dict = {}
- for clip_id, sentence in caption_dict.values():
- if clip_id not in self.video_dict:
- continue
- self.iter2video_pairs_dict[len(self.iter2video_pairs_dict)] = (clip_id, sentence)
-
- self.rawVideoExtractor = RawVideoExtractor(framerate=feature_framerate, size=image_resolution)
- self.SPECIAL_TOKEN = {"CLS_TOKEN": "<|startoftext|>", "SEP_TOKEN": "<|endoftext|>",
- "MASK_TOKEN": "[MASK]", "UNK_TOKEN": "[UNK]", "PAD_TOKEN": "[PAD]"}
-
- def __len__(self):
- return len(self.iter2video_pairs_dict)
-
- def _get_video_id_from_pseduo(self, pseudo_video_id):
- video_id = pseudo_video_id[2:]
- return video_id
-
- def _get_video_id_single(self, path):
- pseudo_video_id_list = []
- video_id_list = []
- print('Loading json: {}'.format(path))
- with open(path, 'r') as f:
- json_data = json.load(f)
-
- for pseudo_video_id in json_data:
- if pseudo_video_id in pseudo_video_id_list:
-                print("duplicate pseudo_video_id found.")
- else:
- video_id = self._get_video_id_from_pseduo(pseudo_video_id)
- pseudo_video_id_list.append(pseudo_video_id)
- video_id_list.append(video_id)
- return pseudo_video_id_list, video_id_list
-
- def _get_captions_single(self, path):
- pseudo_caption_dict = {}
- with open(path, 'r') as f:
- json_data = json.load(f)
-
- for pseudo_video_id, v_ in json_data.items():
- pseudo_caption_dict[pseudo_video_id] = {}
- timestamps = v_["timestamps"]
- pseudo_caption_dict[pseudo_video_id]["start"] = \
- np.array([int(math.floor(float(itm[0]))) for itm in timestamps], dtype=object)
- pseudo_caption_dict[pseudo_video_id]["end"] = \
- np.array([int(math.ceil(float(itm[1]))) for itm in timestamps], dtype=object)
- pseudo_caption_dict[pseudo_video_id]["text"] = np.array(v_["sentences"], dtype=object)
- return pseudo_caption_dict
-
- def _get_text(self, video_id, caption):
- k = 1
- choice_video_ids = [video_id]
-        pairs_text = np.zeros((k, self.max_words), dtype=np.int64)  # np.long was removed in NumPy >= 1.24
-        pairs_mask = np.zeros((k, self.max_words), dtype=np.int64)
-        pairs_segment = np.zeros((k, self.max_words), dtype=np.int64)
-
- for i, video_id in enumerate(choice_video_ids):
- words = self.tokenizer.tokenize(caption)
-
- words = [self.SPECIAL_TOKEN["CLS_TOKEN"]] + words
- total_length_with_CLS = self.max_words - 1
- if len(words) > total_length_with_CLS:
- words = words[:total_length_with_CLS]
- words = words + [self.SPECIAL_TOKEN["SEP_TOKEN"]]
-
- input_ids = self.tokenizer.convert_tokens_to_ids(words)
- input_mask = [1] * len(input_ids)
- segment_ids = [0] * len(input_ids)
- while len(input_ids) < self.max_words:
- input_ids.append(0)
- input_mask.append(0)
- segment_ids.append(0)
- assert len(input_ids) == self.max_words
- assert len(input_mask) == self.max_words
- assert len(segment_ids) == self.max_words
-
- pairs_text[i] = np.array(input_ids)
- pairs_mask[i] = np.array(input_mask)
- pairs_segment[i] = np.array(segment_ids)
-
- return pairs_text, pairs_mask, pairs_segment, choice_video_ids
-
- def _get_rawvideo(self, choice_video_ids):
-        video_mask = np.zeros((len(choice_video_ids), self.max_frames), dtype=np.int64)  # np.long was removed in NumPy >= 1.24
- max_video_length = [0] * len(choice_video_ids)
-
- # Pair x L x T x 3 x H x W
-        video = np.zeros((len(choice_video_ids), self.max_frames, 1, 3,
-                          self.rawVideoExtractor.size, self.rawVideoExtractor.size), dtype=np.float64)  # np.float alias removed in NumPy >= 1.24
-
- try:
- for i, video_id in enumerate(choice_video_ids):
- video_path = self.video_dict[video_id]
-
- raw_video_data = self.rawVideoExtractor.get_video_data(video_path)
- raw_video_data = raw_video_data['video']
-
- if len(raw_video_data.shape) > 3:
- raw_video_data_clip = raw_video_data
- # L x T x 3 x H x W
- raw_video_slice = self.rawVideoExtractor.process_raw_data(raw_video_data_clip)
- if self.max_frames < raw_video_slice.shape[0]:
- if self.slice_framepos == 0:
- video_slice = raw_video_slice[:self.max_frames, ...]
- elif self.slice_framepos == 1:
- video_slice = raw_video_slice[-self.max_frames:, ...]
- else:
- sample_indx = np.linspace(0, raw_video_slice.shape[0]-1, num=self.max_frames, dtype=int)
- video_slice = raw_video_slice[sample_indx, ...]
- else:
- video_slice = raw_video_slice
-
- video_slice = self.rawVideoExtractor.process_frame_order(video_slice, frame_order=self.frame_order)
-
- slice_len = video_slice.shape[0]
- max_video_length[i] = max_video_length[i] if max_video_length[i] > slice_len else slice_len
- if slice_len < 1:
- pass
- else:
- video[i][:slice_len, ...] = video_slice
- else:
- print("video path: {} error. video id: {}".format(video_path, video_id))
- except Exception as excep:
- print("Video ids: {}".format(choice_video_ids))
- raise excep
-
- for i, v_length in enumerate(max_video_length):
- video_mask[i][:v_length] = [1] * v_length
- return video, video_mask
-
- def __getitem__(self, feature_idx):
- clip_id, sentence = self.iter2video_pairs_dict[feature_idx]
- pairs_text, pairs_mask, pairs_segment, choice_video_ids = self._get_text(clip_id, sentence)
- video, video_mask = self._get_rawvideo(choice_video_ids)
- return pairs_text, pairs_mask, pairs_segment, video, video_mask
\ No newline at end of file
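
A hedged usage sketch of the loader defined above: the annotation and video paths are placeholders, and `clip_tokenizer` stands in for any tokenizer exposing the `tokenize` / `convert_tokens_to_ids` methods that `_get_text` expects.

```python
from torch.utils.data import DataLoader

# clip_tokenizer and the two paths are assumptions; any CLIP-style tokenizer
# with tokenize() and convert_tokens_to_ids() satisfies _get_text() above.
dataset = LSMDC_DataLoader(
    subset='train',
    data_path='data/LSMDC/annotations',
    features_path='data/LSMDC/videos',
    tokenizer=clip_tokenizer,
    max_words=30,
    max_frames=12,
    slice_framepos=2,   # sample frames uniformly when a clip is too long
)

loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)
pairs_text, pairs_mask, pairs_segment, video, video_mask = next(iter(loader))
print(video.shape)   # (batch, 1, max_frames, 1, 3, 224, 224)
```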
diff --git a/spaces/Linly-AI/Linly-ChatFlow/models/llama.py b/spaces/Linly-AI/Linly-ChatFlow/models/llama.py
deleted file mode 100644
index 6f16e338ee613031abb7939815e9ba548c8755d5..0000000000000000000000000000000000000000
--- a/spaces/Linly-AI/Linly-ChatFlow/models/llama.py
+++ /dev/null
@@ -1,197 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from models.norm import RMSNorm
-from models.rope import precompute_freqs_cis, apply_rotary_emb
-import bitsandbytes as bnb
-import math
-
-
-class NormalLinear(nn.Linear):
- def reset_parameters(self) -> None:
- pass
-
-
-class BnbInt8Linear(bnb.nn.Linear8bitLt):
- def __init__(self, *args, **kwargs):
- super().__init__(has_fp16_weights=False, threshold=6.0, *args, **kwargs)
-
- def reset_parameters(self) -> None:
- pass
-
-
-def get_linear_layer(use_int8):
- if use_int8:
- return BnbInt8Linear
- return NormalLinear
-
-
-class WordEmbedding(nn.Module):
- def __init__(self, args):
- super(WordEmbedding, self).__init__()
- self.embedding = nn.Embedding(args.vocab_size, args.emb_size)
-
- def forward(self, src):
- emb = self.embedding(src)
- return emb
-
-
-class MultiHeadedAttention(nn.Module):
- def __init__(self, args, hidden_size, heads_num, attention_head_size, has_bias=True, use_int8=True):
- super(MultiHeadedAttention, self).__init__()
- self.heads_num = heads_num
-
- self.per_head_size = attention_head_size
- self.inner_hidden_size = heads_num * attention_head_size
-
- Linear = get_linear_layer(use_int8)
- self.linear_layers = nn.ModuleList(
- [Linear(hidden_size, self.inner_hidden_size, bias=has_bias) for _ in range(3)]
- )
-
- self.final_linear = Linear(self.inner_hidden_size, hidden_size, bias=has_bias)
-
-        # Cache keys/values so previously processed positions are not recomputed during incremental decoding.
- self.cache_k = torch.zeros(
- (args.batch_size, args.seq_length, self.heads_num, self.per_head_size)
- )
- self.cache_v = torch.zeros(
- (args.batch_size, args.seq_length, self.heads_num, self.per_head_size)
- )
-
- def forward(self, key, value, query, start_pos, continue_exsample, mask, freqs_cis):
- batch_size, seq_length, _ = query.size()
- heads_num = self.heads_num
- per_head_size = self.per_head_size
- query, key, value = [l(x).view(batch_size, -1, heads_num, per_head_size) \
- for l, x in zip(self.linear_layers, (query, key, value))]
- query, key = apply_rotary_emb(query, key, freqs_cis=freqs_cis)
- if self.cache_k.device != key.device:
- self.cache_k = self.cache_k.to(key)
- if self.cache_v.device != value.device:
- self.cache_v = self.cache_v.to(value)
-
- self.cache_k[continue_exsample, start_pos: start_pos + seq_length] = key
- self.cache_v[continue_exsample, start_pos: start_pos + seq_length] = value
-
- key = self.cache_k[continue_exsample, : start_pos + seq_length]
- value = self.cache_v[continue_exsample, : start_pos + seq_length]
-
- query, key, value = [x.transpose(1, 2) for x in (query, key, value)]
-
- scores = torch.matmul(query, key.transpose(-2, -1))
- scores = scores / math.sqrt(float(per_head_size))
- if mask is not None:
- scores += mask
- # probs = nn.Softmax(dim=-1)(scores)
- probs = F.softmax(scores.float(), dim=-1).type_as(query)
- output = torch.matmul(probs, value).transpose(1, 2).\
- contiguous().view(batch_size, seq_length, -1)
- return self.final_linear(output)
-
-
-class GatedFeedForward(nn.Module):
- def __init__(self, hidden_size, feedforward_size, has_bias=True, use_int8=True):
- super(GatedFeedForward, self).__init__()
- Linear = get_linear_layer(use_int8)
- self.linear_gate = Linear(hidden_size, feedforward_size, bias=has_bias)
- self.linear_1 = Linear(hidden_size, feedforward_size, bias=has_bias)
- self.linear_2 = Linear(feedforward_size, hidden_size, bias=has_bias)
- self.act = F.silu
-
- def forward(self, x):
- # gate = self.act(self.linear_gate(x))
- gate = self.act(self.linear_gate(x)).type_as(x)
- inter_linear = self.linear_1(x)
- inter = gate * inter_linear
- output = self.linear_2(inter)
- return output
-
-
-class TransformerLayer(nn.Module):
- def __init__(self, args):
- super(TransformerLayer, self).__init__()
-
- if hasattr(args, "attention_head_size"):
- attention_head_size = args.attention_head_size
- else:
- attention_head_size = args.hidden_size // args.heads_num
-
- has_bias = bool(1 - args.remove_transformer_bias)
- # Multi-head Attention
- self.self_attn = MultiHeadedAttention(
- args, args.hidden_size, args.heads_num, attention_head_size, has_bias=has_bias,
- use_int8=args.use_int8
- )
-
- # FFN
- self.feed_forward = GatedFeedForward(
- args.hidden_size, args.feedforward_size, has_bias, use_int8=args.use_int8
- )
-
- self.layer_norm_1 = RMSNorm(args.hidden_size)
- self.layer_norm_2 = RMSNorm(args.hidden_size)
-
- def forward(self, hidden, start_pos, continue_exsample, mask, freqs_cis=None):
- inter = self.layer_norm_1(hidden)
- inter = self.self_attn(inter, inter, inter, start_pos, continue_exsample, mask, freqs_cis)
- hidden = hidden + inter
- output = self.layer_norm_2(hidden)
- output = self.feed_forward(output) + hidden
- return output
-
-
-class TransformerEncoder(nn.Module):
- def __init__(self, args):
- super(TransformerEncoder, self).__init__()
- self.mask = args.mask
- self.layers_num = args.layers_num
-
- self.transformer = nn.ModuleList(
- [TransformerLayer(args) for _ in range(self.layers_num)]
- )
-
- self.layer_norm = RMSNorm(args.hidden_size)
- self.freqs_cis = precompute_freqs_cis(args.hidden_size // args.heads_num, args.max_seq_length * 2)
-
- def forward(self, emb, start_pos, continue_exsample):
- batch_size, seq_length, _ = emb.size()
- mask = None
- if seq_length > 1:
- mask = torch.ones(seq_length, seq_length, device=emb.device)
- mask = torch.tril(mask)
- mask = (1.0 - mask) * -10000
- mask = mask.repeat(batch_size, 1, 1, 1)
-
- hidden = emb
- freqs_cis = self.freqs_cis[start_pos: start_pos + seq_length].to(hidden.device)
-
- for i in range(self.layers_num):
- hidden = self.transformer[i](hidden, start_pos, continue_exsample, mask, freqs_cis=freqs_cis)
- return self.layer_norm(hidden)
-
-
-class LmOutput(nn.Module):
- def __init__(self, args):
- super(LmOutput, self).__init__()
-        # Note: the LM output head intentionally does not use int8 quantization.
- Linear = get_linear_layer(False)
- self.lm = Linear(args.hidden_size, args.vocab_size, bias=False)
-
- def forward(self, x):
- return self.lm(x[:, -1, :])
-
-
-class LLaMa(nn.Module):
- def __init__(self, args):
- super(LLaMa, self).__init__()
- self.embedding = WordEmbedding(args)
- self.encoder = TransformerEncoder(args)
- self.target = LmOutput(args)
-
- #@torch.inference_mode()
- def forward(self, src, start_pos, continue_exsample):
- emb = self.embedding(src)
- output = self.encoder(emb, start_pos, continue_exsample)
- output = self.target(output)
- return output
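
Since `MultiHeadedAttention` above keeps per-position key/value caches, decoding can feed a single new token per step. Below is a minimal greedy-decoding sketch under stated assumptions: `model` is a loaded `LLaMa` instance whose `args.batch_size` / `args.seq_length` are large enough for the sequence, and `prompt_ids` comes from whatever tokenizer matches the checkpoint.

```python
import torch

@torch.no_grad()
def greedy_generate(model, prompt_ids, max_new_tokens=32):
    """Sketch of incremental decoding with the KV cache above (batch row 0).

    Assumes len(prompt_ids) + max_new_tokens <= args.seq_length, since the
    caches in MultiHeadedAttention are pre-allocated to that length.
    """
    device = next(model.parameters()).device
    tokens = torch.tensor([prompt_ids], dtype=torch.long, device=device)
    continue_exsample = [0]            # still-active batch rows
    # Prime the caches with the full prompt (causal mask is built internally).
    logits = model(tokens, 0, continue_exsample)
    start_pos = tokens.shape[1]
    for _ in range(max_new_tokens):
        next_id = logits.argmax(dim=-1, keepdim=True)   # greedy pick, shape (1, 1)
        tokens = torch.cat([tokens, next_id], dim=1)
        # Only the new token is fed; cached keys/values cover earlier positions.
        logits = model(next_id, start_pos, continue_exsample)
        start_pos += 1
    return tokens[0].tolist()
```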
diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/train_domain_A.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/train_domain_A.py
deleted file mode 100644
index 45004938349d674227b2fac3ad9644370c9eda30..0000000000000000000000000000000000000000
--- a/spaces/MCkernick/Image_Restoration_Colorization/Global/train_domain_A.py
+++ /dev/null
@@ -1,147 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import time
-from collections import OrderedDict
-from options.train_options import TrainOptions
-from data.data_loader import CreateDataLoader
-from models.models import create_da_model
-import util.util as util
-from util.visualizer import Visualizer
-import os
-import numpy as np
-import torch
-import torchvision.utils as vutils
-from torch.autograd import Variable
-
-opt = TrainOptions().parse()
-
-if opt.debug:
- opt.display_freq = 1
- opt.print_freq = 1
- opt.niter = 1
- opt.niter_decay = 0
- opt.max_dataset_size = 10
-
-data_loader = CreateDataLoader(opt)
-dataset = data_loader.load_data()
-dataset_size = len(dataset) * opt.batchSize
-print('#training images = %d' % dataset_size)
-
-path = os.path.join(opt.checkpoints_dir, opt.name, 'model.txt')
-visualizer = Visualizer(opt)
-
-iter_path = os.path.join(opt.checkpoints_dir, opt.name, 'iter.txt')
-if opt.continue_train:
- try:
- start_epoch, epoch_iter = np.loadtxt(iter_path, delimiter=',', dtype=int)
- except:
- start_epoch, epoch_iter = 1, 0
- visualizer.print_save('Resuming from epoch %d at iteration %d' % (start_epoch - 1, epoch_iter))
-else:
- start_epoch, epoch_iter = 1, 0
-
-# opt.which_epoch=start_epoch-1
-model = create_da_model(opt)
-fd = open(path, 'w')
-fd.write(str(model.module.netG))
-fd.write(str(model.module.netD))
-fd.close()
-
-total_steps = (start_epoch - 1) * dataset_size + epoch_iter
-
-display_delta = total_steps % opt.display_freq
-print_delta = total_steps % opt.print_freq
-save_delta = total_steps % opt.save_latest_freq
-
-for epoch in range(start_epoch, opt.niter + opt.niter_decay + 1):
- epoch_start_time = time.time()
- if epoch != start_epoch:
- epoch_iter = epoch_iter % dataset_size
- for i, data in enumerate(dataset, start=epoch_iter):
- iter_start_time = time.time()
- total_steps += opt.batchSize
- epoch_iter += opt.batchSize
-
- # whether to collect output images
- save_fake = total_steps % opt.display_freq == display_delta
-
- ############## Forward Pass ######################
- losses, generated = model(Variable(data['label']), Variable(data['inst']),
- Variable(data['image']), Variable(data['feat']), infer=save_fake)
-
- # sum per device losses
- losses = [torch.mean(x) if not isinstance(x, int) else x for x in losses]
- loss_dict = dict(zip(model.module.loss_names, losses))
-
- # calculate final loss scalar
- loss_D = (loss_dict['D_fake'] + loss_dict['D_real']) * 0.5
-        loss_featD = (loss_dict['featD_fake'] + loss_dict['featD_real']) * 0.5
- loss_G = loss_dict['G_GAN'] + loss_dict.get('G_GAN_Feat', 0) + loss_dict.get('G_VGG', 0) + loss_dict['G_KL'] + loss_dict['G_featD']
-
- ############### Backward Pass ####################
- # update generator weights
- model.module.optimizer_G.zero_grad()
- loss_G.backward()
- model.module.optimizer_G.step()
-
- # update discriminator weights
- model.module.optimizer_D.zero_grad()
- loss_D.backward()
- model.module.optimizer_D.step()
-
- model.module.optimizer_featD.zero_grad()
- loss_featD.backward()
- model.module.optimizer_featD.step()
-
- # call(["nvidia-smi", "--format=csv", "--query-gpu=memory.used,memory.free"])
-
- ############## Display results and errors ##########
- ### print out errors
- if total_steps % opt.print_freq == print_delta:
- errors = {k: v.data if not isinstance(v, int) else v for k, v in loss_dict.items()}
- t = (time.time() - iter_start_time) / opt.batchSize
- visualizer.print_current_errors(epoch, epoch_iter, errors, t, model.module.old_lr)
- visualizer.plot_current_errors(errors, total_steps)
-
- ### display output images
- if save_fake:
-
- if not os.path.exists(opt.outputs_dir + opt.name):
- os.makedirs(opt.outputs_dir + opt.name)
- imgs_num = data['label'].shape[0]
- imgs = torch.cat((data['label'], generated.data.cpu(), data['image']), 0)
-
- imgs = (imgs + 1.) / 2.0
-
- try:
- image_grid = vutils.save_image(imgs, opt.outputs_dir + opt.name + '/' + str(epoch) + '_' + str(
- total_steps) + '.png',
- nrow=imgs_num, padding=0, normalize=True)
- except OSError as err:
- print(err)
-
-
- if epoch_iter >= dataset_size:
- break
-
- # end of epoch
- iter_end_time = time.time()
- print('End of epoch %d / %d \t Time Taken: %d sec' %
- (epoch, opt.niter + opt.niter_decay, time.time() - epoch_start_time))
-
- ### save model for this epoch
- if epoch % opt.save_epoch_freq == 0:
- print('saving the model at the end of epoch %d, iters %d' % (epoch, total_steps))
- model.module.save('latest')
- model.module.save(epoch)
- np.savetxt(iter_path, (epoch + 1, 0), delimiter=',', fmt='%d')
-
- ### instead of only training the local enhancer, train the entire network after certain iterations
- if (opt.niter_fix_global != 0) and (epoch == opt.niter_fix_global):
- model.module.update_fixed_params()
-
- ### linearly decay learning rate after certain iterations
- if epoch > opt.niter:
- model.module.update_learning_rate()
-
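
The `update_learning_rate` call above lives in the model class and is not shown in this file; in pix2pixHD-style code it typically decays the rate linearly to zero over the final `niter_decay` epochs. A hedged sketch of that schedule (not the repository's exact implementation):

```python
def linear_decay_lr(base_lr, epoch, niter, niter_decay):
    """Keep base_lr for the first `niter` epochs, then decay linearly to zero
    over the next `niter_decay` epochs (common pix2pixHD-style schedule)."""
    if epoch <= niter:
        return base_lr
    return base_lr * max(0.0, 1.0 - (epoch - niter) / float(niter_decay))

# Example with base_lr=2e-4, niter=100, niter_decay=100:
# epoch 100 -> 2.0e-4, epoch 150 -> 1.0e-4, epoch 200 -> 0.0
```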
diff --git a/spaces/Manjushri/PhotoReal-V3.6/app.py b/spaces/Manjushri/PhotoReal-V3.6/app.py
deleted file mode 100644
index 1cc5db92e6220b5b791239b4fc754d3b131600fe..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/PhotoReal-V3.6/app.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import gradio as gr
-import torch
-import numpy as np
-import modin.pandas as pd
-from PIL import Image
-from diffusers import DiffusionPipeline, StableDiffusionLatentUpscalePipeline
-
-device = 'cuda' #if torch.cuda.is_available() else 'cpu'
-
-pipe = DiffusionPipeline.from_pretrained("circulus/canvers-realistic-v3.6", torch_dtype=torch.float16, safety_checker=None)
-pipe = pipe.to(device)
-pipe.enable_xformers_memory_efficient_attention()
-refiner = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0", use_safetensors=True, torch_dtype=torch.float16, variant="fp16")
-refiner.enable_xformers_memory_efficient_attention()
-refiner = refiner.to(device)
-
-def genie (Prompt, negative_prompt, height, width, scale, steps, seed, upscale, high_noise_frac):
- generator = np.random.seed(0) if seed == 0 else torch.manual_seed(seed)
- if upscale == "Yes":
- #n_steps = 30
- int_image = pipe(Prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=scale).images
- image = refiner(Prompt, negative_prompt=negative_prompt, image=int_image, denoising_start=high_noise_frac).images[0]
- return image
- else:
- image = pipe(Prompt, negative_prompt=negative_prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=scale).images[0]
- return image
-
-gr.Interface(fn=genie, inputs=[gr.Textbox(label='What you want the AI to generate. 77 Token Limit.'),
- gr.Textbox(label='What you Do Not want the AI to generate. 77 Token Limit'),
- gr.Slider(512, 1024, 768, step=128, label='Height'),
- gr.Slider(512, 1024, 768, step=128, label='Width'),
- gr.Slider(1, maximum=15, value=7, step=.25, label='Guidance Scale'),
- gr.Slider(25, maximum=100, value=50, step=25, label='Number of Iterations'),
- gr.Slider(minimum=0, step=1, maximum=9999999999999999, randomize=True, label='Seed: 0 is Random'),
- gr.Radio(["Yes", "No"], label='SDXL 1.0 Refiner: Use if the Image has too much Noise', value='No'),
- gr.Slider(minimum=.9, maximum=.99, value=.95, step=.01, label='Refiner Denoise Start %')],
- outputs=gr.Image(label='Generated Image'),
- title="PhotoReal V3.6 with SDXL 1.0 Refiner - GPU",
- description=" Warning: This Demo is capable of producing NSFW content.",
- article = "If You Enjoyed this Demo and would like to Donate, you can send to any of these Wallets. BTC: bc1qzdm9j73mj8ucwwtsjx4x4ylyfvr6kp7svzjn84 3LWRoKYx6bCLnUrKEdnPo3FCSPQUSFDjFP DOGE: DK6LRc4gfefdCTRk9xPD239N31jh9GjKez SHIB (BEP20): 0xbE8f2f3B71DFEB84E5F7E3aae1909d60658aB891 PayPal: https://www.paypal.me/ManjushriBodhisattva ETH: 0xbE8f2f3B71DFEB84E5F7E3aae1909d60658aB891 Code Monkey: Manjushri ").launch(debug=True, max_threads=80)
\ No newline at end of file
diff --git a/spaces/Masa-digital-art/planning-proposal-gpt-4/constraints.md b/spaces/Masa-digital-art/planning-proposal-gpt-4/constraints.md
deleted file mode 100644
index 875c7acdf4038c47c1570ecf96438731411a344b..0000000000000000000000000000000000000000
--- a/spaces/Masa-digital-art/planning-proposal-gpt-4/constraints.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Constraints
-
-- Abstractly interpret and brainstorm what the user has entered, and propose ideas for story planning
-- Projects must be attractive and have a high potential for commercial success, including surprising novelty
-- Suggest each of the items listed below
-- Your reply will be generated in Japanese according to the template below
-
-## items to generate
-
-### The setting of the story
-
-- Propose a story stage according to the content entered by the user
-
-### Character situation
-
-- Suggest the difficult situation the character is in, based on the content entered by the user
-
-### Situation where the main character breaks through
-
-- Suggest a situation that the main character can break through and solve according to the content entered by the user
-
-### A development that exceeds the expectations of the audience
-
-- Propose surprising developments in the story that defy the audience's expectations according to the user's input
-
-### Story Synopsis
-
-- Suggest a synopsis that includes the stage of the story, the situation in which the characters are placed, the situation where the main character breaks through, and the development that exceeds the audience's expectations.
-- Make it an attractive sentence that gives a strong impression to those who see the synopsis.
-
-### Key concept of planning
-
-- Catchphrase that clearly expresses the story
-- Make sure you know who, where and what the story is about
-
-# template
\ No newline at end of file
diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/README.md b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/README.md
deleted file mode 100644
index b57cc7f8d952c22a41966a586279565858ccf761..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/nrtr/README.md
+++ /dev/null
@@ -1,58 +0,0 @@
-# NRTR
-
-> [NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition](https://arxiv.org/abs/1806.00926)
-
-
-
-## Abstract
-
-Scene text recognition has attracted a great many researches due to its importance to various applications. Existing methods mainly adopt recurrence or convolution based networks. Though have obtained good performance, these methods still suffer from two limitations: slow training speed due to the internal recurrence of RNNs, and high complexity due to stacked convolutional layers for long-term feature extraction. This paper, for the first time, proposes a no-recurrence sequence-to-sequence text recognizer, named NRTR, that dispenses with recurrences and convolutions entirely. NRTR follows the encoder-decoder paradigm, where the encoder uses stacked self-attention to extract image features, and the decoder applies stacked self-attention to recognize texts based on encoder output. NRTR relies solely on self-attention mechanism thus could be trained with more parallelization and less complexity. Considering scene image has large variation in text and background, we further design a modality-transform block to effectively transform 2D input images to 1D sequences, combined with the encoder to extract more discriminative features. NRTR achieves state-of-the-art or highly competitive performance on both regular and irregular benchmarks, while requires only a small fraction of training time compared to the best model from the literature (at least 8 times faster).
-
-
-
-
-
-## Dataset
-
-### Train Dataset
-
-| trainset | instance_num | repeat_num | source |
-| :-------: | :----------: | :--------: | :----: |
-| SynthText | 7266686 | 1 | synth |
-| Syn90k | 8919273 | 1 | synth |
-
-### Test Dataset
-
-| testset | instance_num | type |
-| :-----: | :----------: | :-------: |
-| IIIT5K | 3000 | regular |
-| SVT | 647 | regular |
-| IC13 | 1015 | regular |
-| IC15 | 2077 | irregular |
-| SVTP | 645 | irregular |
-| CT80 | 288 | irregular |
-
-## Results and Models
-
-| Methods | Backbone | | Regular Text | | | | Irregular Text | | download |
-| :---------------------------------------------------------: | :-------------------: | :----: | :----------: | :-------: | :-: | :-------: | :------------: | :----: | :-----------------------------------------------------------: |
-| | | IIIT5K | SVT | IC13-1015 | | IC15-2077 | SVTP | CT80 | |
-| [NRTR](/configs/textrecog/nrtr/nrtr_modality-transform_6e_st_mj.py) | NRTRModalityTransform | 0.9147 | 0.8841 | 0.9369 | | 0.7246 | 0.7783 | 0.7500 | [model](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_modality-transform_6e_st_mj/nrtr_modality-transform_6e_st_mj_20220916_103322-bd9425be.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_modality-transform_6e_st_mj/20220916_103322.log) |
-| [NRTR-TTA](/configs/textrecog/nrtr/nrtr_modality-transform_6e_st_mj.py) | NRTRModalityTransform | 0.9123 | 0.8825 | 0.9310 | | 0.7492 | 0.7798 | 0.7535 | |
-| [NRTR](/configs/textrecog/nrtr/nrtr_resnet31-1by8-1by4_6e_st_mj.py) | R31-1/8-1/4 | 0.9483 | 0.8918 | 0.9507 | | 0.7578 | 0.8016 | 0.8889 | [model](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_resnet31-1by8-1by4_6e_st_mj/nrtr_resnet31-1by8-1by4_6e_st_mj_20220916_103322-a6a2a123.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_resnet31-1by8-1by4_6e_st_mj/20220916_103322.log) |
-| [NRTR-TTA](/configs/textrecog/nrtr/nrtr_resnet31-1by8-1by4_6e_st_mj.py) | R31-1/8-1/4 | 0.9443 | 0.8903 | 0.9478 | | 0.7790 | 0.8078 | 0.8854 | |
-| [NRTR](/configs/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj.py) | R31-1/16-1/8 | 0.9470 | 0.8918 | 0.9399 | | 0.7376 | 0.7969 | 0.8854 | [model](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj/nrtr_resnet31-1by16-1by8_6e_st_mj_20220920_143358-43767036.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj/20220920_143358.log) |
-| [NRTR-TTA](/configs/textrecog/nrtr/nrtr_resnet31-1by16-1by8_6e_st_mj.py) | R31-1/16-1/8 | 0.9423 | 0.8903 | 0.9360 | | 0.7641 | 0.8016 | 0.8854 | |
-
-## Citation
-
-```bibtex
-@inproceedings{sheng2019nrtr,
- title={NRTR: A no-recurrence sequence-to-sequence model for scene text recognition},
- author={Sheng, Fenfen and Chen, Zhineng and Xu, Bo},
- booktitle={2019 International Conference on Document Analysis and Recognition (ICDAR)},
- pages={781--786},
- year={2019},
- organization={IEEE}
-}
-```
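
To make the abstract's "no recurrence, no convolution stacks for sequence modeling" idea concrete, below is a hedged PyTorch sketch of the overall shape of such a recognizer: a small CNN stem plays the role of the modality-transform block (2D image to 1D sequence), and stacked self-attention layers do the rest. It is an illustration only, not the MMOCR implementation, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class TinyNRTRLikeRecognizer(nn.Module):
    """Toy no-recurrence recognizer: CNN stem -> flatten to a sequence ->
    Transformer encoder/decoder. Illustrative only, not the MMOCR model."""

    def __init__(self, num_chars=100, d_model=256):
        super().__init__()
        # "Modality transform": downsample the image and project to d_model.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        self.embed = nn.Embedding(num_chars, d_model)
        self.classifier = nn.Linear(d_model, num_chars)

    def forward(self, images, tgt_tokens):
        feats = self.stem(images)                       # (B, d_model, H/4, W/4)
        src = feats.flatten(2).transpose(1, 2)          # 2D features -> 1D sequence
        tgt = self.embed(tgt_tokens)                    # (B, T, d_model)
        out = self.transformer(src, tgt)                # self-attention only
        return self.classifier(out)                     # (B, T, num_chars)

# A batch of two 32x128 RGB crops and 10 target tokens per crop.
model = TinyNRTRLikeRecognizer()
logits = model(torch.randn(2, 3, 32, 128), torch.randint(0, 100, (2, 10)))
print(logits.shape)   # torch.Size([2, 10, 100])
```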
diff --git a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_smoke_test.py b/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_smoke_test.py
deleted file mode 100644
index f27e583aee473c6a04a5af20fd101c7a54871e94..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/audioset/vggish/vggish_smoke_test.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright 2017 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""A smoke test for VGGish.
-
-This is a simple smoke test of a local install of VGGish and its associated
-downloaded files. We create a synthetic sound, extract log mel spectrogram
-features, run them through VGGish, post-process the embedding outputs, and
-check some simple statistics of the results, allowing for variations that
-might occur due to platform/version differences in the libraries we use.
-
-Usage:
-- Download the VGGish checkpoint and PCA parameters into the same directory as
- the VGGish source code. If you keep them elsewhere, update the checkpoint_path
- and pca_params_path variables below.
-- Run:
- $ python vggish_smoke_test.py
-"""
-
-from __future__ import print_function
-
-import numpy as np
-import tensorflow.compat.v1 as tf
-tf.disable_v2_behavior()
-
-import vggish_input
-import vggish_params
-import vggish_postprocess
-import vggish_slim
-
-print('\nTesting your install of VGGish\n')
-
-# Paths to downloaded VGGish files.
-checkpoint_path = 'vggish_model.ckpt'
-pca_params_path = 'vggish_pca_params.npz'
-
-# Relative tolerance of errors in mean and standard deviation of embeddings.
-rel_error = 0.1 # Up to 10%
-
-# Generate a 1 kHz sine wave at 44.1 kHz (we use a high sampling rate
-# to test resampling to 16 kHz during feature extraction).
-num_secs = 3
-freq = 1000
-sr = 44100
-t = np.linspace(0, num_secs, int(num_secs * sr))
-x = np.sin(2 * np.pi * freq * t)
-
-# Produce a batch of log mel spectrogram examples.
-input_batch = vggish_input.waveform_to_examples(x, sr)
-print('Log Mel Spectrogram example: ', input_batch[0])
-np.testing.assert_equal(
- input_batch.shape,
- [num_secs, vggish_params.NUM_FRAMES, vggish_params.NUM_BANDS])
-
-# Define VGGish, load the checkpoint, and run the batch through the model to
-# produce embeddings.
-with tf.Graph().as_default(), tf.Session() as sess:
- vggish_slim.define_vggish_slim()
- vggish_slim.load_vggish_slim_checkpoint(sess, checkpoint_path)
-
- features_tensor = sess.graph.get_tensor_by_name(
- vggish_params.INPUT_TENSOR_NAME)
- embedding_tensor = sess.graph.get_tensor_by_name(
- vggish_params.OUTPUT_TENSOR_NAME)
- [embedding_batch] = sess.run([embedding_tensor],
- feed_dict={features_tensor: input_batch})
- print('VGGish embedding: ', embedding_batch[0])
- expected_embedding_mean = 0.131
- expected_embedding_std = 0.238
- np.testing.assert_allclose(
- [np.mean(embedding_batch), np.std(embedding_batch)],
- [expected_embedding_mean, expected_embedding_std],
- rtol=rel_error)
-
-# Postprocess the results to produce whitened quantized embeddings.
-pproc = vggish_postprocess.Postprocessor(pca_params_path)
-postprocessed_batch = pproc.postprocess(embedding_batch)
-print('Postprocessed VGGish embedding: ', postprocessed_batch[0])
-expected_postprocessed_mean = 123.0
-expected_postprocessed_std = 75.0
-np.testing.assert_allclose(
- [np.mean(postprocessed_batch), np.std(postprocessed_batch)],
- [expected_postprocessed_mean, expected_postprocessed_std],
- rtol=rel_error)
-
-print('\nLooks Good To Me!\n')
diff --git a/spaces/Nathanotal/stockholmHousingValuation/app.py b/spaces/Nathanotal/stockholmHousingValuation/app.py
deleted file mode 100644
index 75470f2fb120cf5bf9a865bf70d292578d4dc1d0..0000000000000000000000000000000000000000
--- a/spaces/Nathanotal/stockholmHousingValuation/app.py
+++ /dev/null
@@ -1,424 +0,0 @@
-import gradio as gr
-import numpy as np
-from PIL import Image
-import requests
-import pandas as pd
-import matplotlib.pyplot as plt
-import numpy as np
-import joblib
-import hopsworks
-from tqdm import tqdm
-import xgboost as xgb
-from geopy.geocoders import Nominatim
-from datetime import date
-from datetime import timedelta
-from autogluon.tabular import TabularPredictor
-import shutil
-
-# Login to hopsworks and get the feature store
-
-# streetName;number;sqm;rooms;soldDate;monthlyFee;monthlyCost;floor;yearBuilt;brf;agency;lat;lon;gdp;unemployment;interestRate
-columnHeaders = ['streetName','number','sqm','rooms','soldDate','monthlyFee','monthlyCost','floor','yearBuilt', 'brf','agency','lat','lon'] # ,'gdp','unemployment','interestRate'
-
-featureToMinMax = {
- 'sqm': (10, 800),
- 'rooms': (1, 20),
- 'monthlyFee': (0, 60000),
- 'monthlyCost': (0, 20000),
- 'floor': (-3, 35),
- 'yearBuilt': (1850, 2023),
- 'lat': (58.8, 60.2),
- 'lon': (17.5, 19.1),
- 'gdp': (505.1, 630.14),
- 'unemployment': (6.36, 8.66),
- 'interestRate': (-0.5, 2.64),
- 'number': (0, 300),
- 'soldDate': (2010, 2025)
- } # Extracted from the data
-
-featureToName = {
- 'number' : 'Street number',
- 'sqm' : 'Size of the apartment in square meters',
- 'rooms' : 'Number of rooms',
- 'monthlyFee' : 'Monthly fee',
- 'monthlyCost' : 'Monthly operating cost',
- 'floor' : 'Floor',
- 'yearBuilt' : 'Year built',
- 'streetName' : 'Name of street',
-}
-
-topAgencies = ['Fastighetsbyrån','Notar','Svensk Fastighetsförmedling','HusmanHagberg','Länsförsäkringar Fastighetsförmedling','Erik Olsson','SkandiaMäklarna','Svenska Mäklarhuset','Bjurfors','Mäklarhuset','BOSTHLM','Innerstadsspecialisten','MOHV','Mäklarringen','Historiska Hem','Södermäklarna','Karlsson & Uddare','UNIK Fastighetsförmedling','Edward & Partners','Widerlöv']
-
-def downloadAutogluonModel():
- # Download saved Autogluon model from Hopsworks
- project = hopsworks.login()
- mr = project.get_model_registry()
- temp = mr.get_model("ag_model_20230109", version=5)
- temp_ag_folder_path = temp.download()
- print(temp_ag_folder_path)
- moveFolder(temp_ag_folder_path)
-
- ag_model = TabularPredictor.load("AutogluonModels/ag_model_20230109") # '/ag_model_20230109'
-
- return ag_model
-
-
-def moveFolder(temp_ag_folder_path):
- # Move Autogluon model folder to the correct folder
- original = temp_ag_folder_path
- target = "AutogluonModels/"
- shutil.move(original, target)
-
-def downloadModel():
- # Download saved Autogluon model from Hopsworks
- project = hopsworks.login()
- mr = project.get_model_registry()
- temp = mr.get_model("xgboost_model", version=5)
- model_path = temp.download()
-
- xgb_model = joblib.load(model_path + "/xgboost_model.pkl")
- return xgb_model
-
-def getAddressInfo(streetName, number):
- streetName = cleanAddress(streetName)
- try:
- return getCoordinatesFromAddress(streetName, number)
- except AddressNotFound:
- return None, None
-
-# Adds the financial data to the apartment data
-def populateApartmentData(aptDf):
- print('Populating with financial data...')
- gdpDf = pd.read_csv(f'./data/historicalGDP.csv', sep=';')
- unemploymentDf = pd.read_csv(f'./data/historicalUnemployment.csv', sep=';')
- interestRateDf = pd.read_csv(f'./data/historicalInterest.csv', sep=';')
- gdpDf = interpolateTime(gdpDf)
- unemploymentDf = interpolateTime(unemploymentDf)
- interestRateDf = interpolateTime(interestRateDf)
- aptDf['gdp'] = aptDf['soldDate'].apply(getValueFromTime, args=(gdpDf,))
- aptDf['unemployment'] = aptDf['soldDate'].apply(getValueFromTime, args=(unemploymentDf,))
- aptDf['interestRate'] = aptDf['soldDate'].apply(getValueFromTime, args=(interestRateDf,))
- return aptDf
-
-def interpolateTime(df):
- df['date'] = pd.to_datetime(df['date'])
- df = df.set_index('date')
- df = df.resample('MS').mean()
- df = df.interpolate(method='time')
- return fixChange(df)
-
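
The macro series above are published at coarser-than-monthly intervals, so `interpolateTime` resamples them to month starts and fills the gaps with time-weighted interpolation before `getValueFromTime` looks up a sold date. A toy, self-contained sketch of that step with made-up values:

```python
import pandas as pd

# Made-up quarterly GDP-like series with the same date/value columns as the
# CSVs read in populateApartmentData above.
toy = pd.DataFrame({
    'date': ['2022-01-01', '2022-04-01', '2022-07-01'],
    'value': [600.0, 606.0, 615.0],
})
toy['date'] = pd.to_datetime(toy['date'])
toy = toy.set_index('date').resample('MS').mean()        # one row per month start
toy['value'] = toy['value'].interpolate(method='time')   # fill the new months
print(toy)
# 2022-02-01 and 2022-03-01 now hold values between 600 and 606, which is
# exactly what getValueFromTime() will look up for an apartment sold then.
```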
-def getValueFromTime(datetime, dataDf):
- # Get the value from the dataDf at the given datetime
- # If the datetime is not in the dataDf, print the datetime and return '0'
- # First, set the day of the datetime to the first day of the month
- # parse datetime to enable replacement
- datetime = pd.to_datetime(datetime)
- datetime = datetime.replace(day=1)
- try:
- return dataDf.loc[datetime, 'value']
-    except KeyError:
-        # Not in the index: fall back to the first day of the following month.
-        if datetime.month == 12:
-            datetime = datetime.replace(year=datetime.year + 1, month=1)
-        else:
-            datetime = datetime.replace(month=datetime.month + 1)
-        try:
-            return dataDf.loc[datetime, 'value']
-        except KeyError:
-            print(datetime)
-            return 0
-
-def fixChange(df):
- # Set change to be the difference between the current and previous price
- df['change'] = df['value'].diff()
- # If the change is Nan set it to 0
- df['change'] = df['change'].fillna(0)
-
- return df
-
-def cleanAddress(x):
- # Remove "-" from the street
- x = ''.join(x.split('-'))
- # Remove all zero width spaces, non-breaking spaces and non-breaking hyphens
- x = x.replace('\u200b', '')
- x = x.replace('\u00a0', '')
- x = x.replace('\u2011', '')
- # Remove all soft hyphens
- x = x.replace('\xad', '')
- x = x.replace('\u200c', '')
-
- x.strip()
- return x
-
-class AddressNotFound(Exception):
- pass
-
-def getCoordinatesFromAddress(streetName, number):
-
- HOST_ADDRESS = 'nominatim.openstreetmap.org'
- # HOST_PORT = '8080'
- EMAIL = 'nathan.allard@gmail.com'
- DOMAIN = HOST_ADDRESS # + ':' + HOST_PORT
- LOCATOR = Nominatim(user_agent=EMAIL, domain=DOMAIN, scheme='http', timeout=10)
-
- number = str(int(float(number)))
- address = f'{streetName} {number}, Stockholm'
-
- if number == '0':
- address = f'{streetName}, Stockholm'
-
- location = LOCATOR.geocode(address)
-
- if location is None:
- raise AddressNotFound
- else:
- # Return with a precision of 6 decimals (accuracy of <1 meter)
- lat = round(location.latitude, 6)
- lon = round(location.longitude, 6)
- return lat, lon
-
-def dateToFloat(date):
- year, month, day = str(date).split('-')
- day = day.split(' ')[0]
- return int(year) + int(month) / 12 + int(day) / 365
-
-def normalize(x, minVal, maxVal, feature):
- # Not fantastic
- res = (float(x) - minVal) / (maxVal - minVal)
- return min(max(res, 0), 1)
-
-def normalizeData(df):
- # Normalize select numerical values to a value between 0 and 1
- print('Normalizing data...')
- for feature, minMax in tqdm(featureToMinMax.items()):
-        min_val = minMax[0]  # avoid shadowing the min()/max() builtins
-        max_val = minMax[1]
-        if feature == 'soldDate':
-            df[feature] = df[feature].apply(lambda x: dateToFloat(x))
-
-        df[feature] = df[feature].apply(lambda x: normalize(x, min_val, max_val, feature))
-
- return df
-
-def parsePrice(price):
- featureToMinMaxPrice = {
- 'price': (1.5e5, 7e7)
- }
- MIN = featureToMinMaxPrice['price'][0]
- MAX = featureToMinMaxPrice['price'][1]
-
- price = float(price)
- price = price * (MAX - MIN) + MIN
- return f'{addDotsToPrice(int(price))} SEK'
-
-def addDotsToPrice(price):
- # Takes an int like 1000000 and returns a string like 1.000.000
- toReturn = ''
- price = str(price)
- for i, c in enumerate(price):
- toReturn += c
- if (len(price) - i) % 3 == 1 and i != len(price) - 1 and c != '-':
- toReturn += '.'
- return toReturn
-
-
-
-def xgbFix(df):
- features_to_categorical = ["streetName", "brf", "agency"]
-
- features_to_float = ["number", "sqm", "rooms", "monthlyFee",
- "monthlyCost", "floor", "yearBuilt", "gdp", "unemployment",
- "interestRate", "lat", "lon", "soldDate"]
-
- df[features_to_categorical] = df[features_to_categorical].astype("category")
- df[features_to_float] = df[features_to_float].astype(float)
- return df
-
-
-model = downloadModel()
-autoModel = downloadAutogluonModel()
-
-def xgboostPred(df):
- # Drop categorical features
- df = df.drop(['streetName', 'brf', 'agency'], axis=1)
-
- # Save first row as a numpy array
-
- results = []
- for _,row in df.iterrows():
- input_list = row.to_numpy()
- res = model.predict(np.asarray(input_list).reshape(1, -1))
- results.append(res[0]) # This is not done in a good way
-
- return results
-
-def addExtraAgencyFun(df):
- # Make 20 copies of the first row with the 20 different top agencies in Sweden
- # Make a copy of the first row
- firstRow = df.iloc[0]
- # Make a list of the copies
- rows = [firstRow] * len(topAgencies)
- # Make a dataframe from the list
- df2 = pd.DataFrame(rows)
-
- # Add the top agencies to the dataframe
- for i, agency in enumerate(topAgencies):
-        df2.iloc[i, df2.columns.get_loc('agency')] = agency  # avoid pandas chained-assignment pitfalls
-
- # Concatenate the two dataframes
- df = pd.concat([df, df2], ignore_index=True)
-
- return df
-
-def autoPred(df):
- df = addExtraAgencyFun(df)
- res = autoModel.predict(df)
-
- # Convert to a list
- res = res.tolist()
-
- # Get the last 20 values
- agencyResults = res[-20:]
- res = res[:-20]
-
- # Get the mean of the agencies
- agencyToResult = {agency:result for agency, result in zip(topAgencies, agencyResults)}
- for agency, result in agencyToResult.items():
- print(agency, str(result))
-
- # Get the top and bottom 3 agencies with the highest results
- sortedAgencies = sorted(agencyToResult.items(), key=lambda x: x[1])
- meanPrice = sum(agencyResults) / len(agencyResults)
- top3 = sortedAgencies[-5:]
- top3.reverse()
-
- agencyString = parseAgencyResult(top3, meanPrice)
-
- return res, agencyString
-
-def parseAgencyResult(top3, meanPrice):
- toReturn = 'To get the most money for your apartment, you should sell it with the help of one of these agencies:\n'
- toReturn += 'Top 5:\n'
- for agency, result in top3:
- diff = result - meanPrice
- toReturn += f'{agency}: {parsePrice(result)} ({parsePrice(diff)} above mean)\n'
-
- return toReturn
-
-def isValidInput(streetName, number, sqm, rooms, monthlyFee, monthlyCost, floor, yearBuilt):
- # Street name is a string, all other values are numbers
- if streetName == '':
- return 'Street name is empty'
- # If Street name contains numbers it should fail
- if any(char.isdigit() for char in streetName):
- return 'Only letters are allowed in street name'
-
- toCheck = [number, sqm, rooms, monthlyFee, monthlyCost, floor, yearBuilt]
- toCheckName = ['number', 'sqm', 'rooms', 'monthlyFee', 'monthlyCost', 'floor', 'yearBuilt']
- for val, name in zip(toCheck, toCheckName):
- MIN = featureToMinMax[name][0]
- MAX = featureToMinMax[name][1]
- if val < MIN:
- return f'{featureToName.get(name)} is too low'
- if val > MAX:
- return f'{featureToName.get(name)} is too high'
-
- return None
-
-def getDates():
- today = date.today()
- # inAMonth = today + timedelta(days=30)
- inAYear = today + timedelta(days=365)
- lastYear = today - timedelta(days=365)
- beforeUkraineWar = '2022-02-24'
- threeYearsAgo = today - timedelta(days=365*3)
-
- dateToExplanation = {
- today.strftime("%Y-%m-%d") : 'today',
- # inAMonth.strftime("%Y-%m-%d") : 'in a month',
- inAYear.strftime("%Y-%m-%d") : 'in a year',
- lastYear.strftime("%Y-%m-%d") : 'last year',
- threeYearsAgo.strftime("%Y-%m-%d") : 'three years ago',
- beforeUkraineWar : 'before Russia invaded Ukraine',
- }
-
- return dateToExplanation
-
-
-def sthlm(streetName, number, sqm, rooms, monthlyFee, monthlyCost, floor, yearBuilt, agency, auto):
- inputErrors = isValidInput(streetName, number, sqm, rooms, monthlyFee, monthlyCost, floor, yearBuilt)
- if inputErrors is not None:
- return '0', '', '', inputErrors
- lat, lon = getAddressInfo(streetName, number)
- # If none
- if lat is None or lon is None:
- return '0', '', '', 'Address not found in the OpenStreetMap dataset (Nominatim), please try another address'
-
- brf = 'BRF Kartboken 1' # Not used
- dates = getDates()
- input_variables = pd.DataFrame(
- columns=columnHeaders)
-
- for soldDate in dates.keys():
- # Parse the input so we can run it through the model
- # Create a dataframe from the input values
-        # DataFrame.append was removed in pandas 2.0; build the row and concatenate instead.
-        new_row = pd.DataFrame(
-            [[streetName, number, sqm, rooms, soldDate, monthlyFee, monthlyCost, floor, yearBuilt, brf, agency, lat, lon]],
-            columns=columnHeaders)
-        input_variables = pd.concat([input_variables, new_row], ignore_index=True)
-
- df = populateApartmentData(input_variables)
- df = normalizeData(df)
-
- pricePred = None
- agencyInfo = 'Please use AutoGluon instead of XGBoost to get information about agencies'
- if auto:
- pricePred, agencyInfo = autoPred(df)
- else:
- df = xgbFix(df)
- pricePred = xgboostPred(df)
-
- explanations = list(dates.values())
- result = [] #
- mainPred = None
- mainExplanation = None
- for i, pred in enumerate(pricePred):
- explanation = explanations[i]
- if i == 0:
- mainExplanation = explanation
- mainPred = pred
- else:
- diff = pred - mainPred
- if diff > 0:
- result.append(f'If sold {explanation} it would have been worth more: {parsePrice(pred)} (+{parsePrice(diff)})')
- else:
- result.append(f'If sold {explanation} it would have been worth less: {parsePrice(pred)} ({parsePrice(diff)})')
-
-
-
- return f'Predicted price of the apartment {mainExplanation}: {parsePrice(mainPred)}', '\n'.join(result), agencyInfo, ''
-
-
-
-# All features present in the sthlm dataset
-numericalInputs = ['number', 'sqm','rooms', 'monthlyFee','monthlyCost','floor','yearBuilt']
-inputs = [gr.inputs.Textbox(lines=1, label='streetName')]
-
-
-
-# Generate the input form
-for feature in numericalInputs:
- minVal = featureToMinMax[feature][0]
- maxVal = featureToMinMax[feature][1]
- theLabel = f'{featureToName.get(feature)} (min: {minVal}, max: {maxVal})'
- inputs.append(gr.inputs.Number(default=0, label=theLabel))
-
-# Add a switch to choose between xgboost and autogluon
-inputs.append(gr.inputs.Dropdown(label='Agency', choices=topAgencies, default='Notar'))
-inputs.append(gr.inputs.Checkbox( label='Use AutoGluon instead of XGBoost', default=False))
-# Create the interface
-resultOutputs = [gr.outputs.Label(label='Price if sold today'), gr.outputs.Textbox(label='If sold at a different time'), gr.outputs.Textbox(label='Best agencies to use'), gr.outputs.Textbox(label='Error').style(color='red')]
-
-demo = gr.Interface(
- fn=sthlm,
- title="Stockholm Housing Valuation",
- description="Predict the price of an apartment in Stockholm. To get information about which agency to use, please select AutoGluon",
- allow_flagging="never",
- inputs=inputs,
- outputs=resultOutputs)
-
-demo.launch()
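
Because the model is trained on min-max scaled features and targets, `parsePrice` above has to undo the same scaling that `normalizeData` applies. A small self-contained sketch of that round trip, using the price bounds hard-coded in `parsePrice` (1.5e5 to 7e7 SEK):

```python
PRICE_MIN, PRICE_MAX = 1.5e5, 7e7   # same bounds as featureToMinMaxPrice above

def to_unit_interval(price_sek):
    """Scale an absolute price into [0, 1], as done for the training target."""
    return (price_sek - PRICE_MIN) / (PRICE_MAX - PRICE_MIN)

def from_unit_interval(score):
    """Invert the scaling, which is what parsePrice() does before formatting."""
    return score * (PRICE_MAX - PRICE_MIN) + PRICE_MIN

raw = 4_500_000                                    # 4.5 million SEK
assert abs(from_unit_interval(to_unit_interval(raw)) - raw) < 1e-6
print(round(to_unit_interval(raw), 4))             # ~0.0623
```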
diff --git a/spaces/NimaBoscarino/climategan/shared/template/resume_mila_victor.sh b/spaces/NimaBoscarino/climategan/shared/template/resume_mila_victor.sh
deleted file mode 100644
index 2a5bcac63bdf841406afc9718a31dcfc8bf4df33..0000000000000000000000000000000000000000
--- a/spaces/NimaBoscarino/climategan/shared/template/resume_mila_victor.sh
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/bash
-#SBATCH --partition={partition}
-#SBATCH --cpus-per-task={cpus}
-#SBATCH --mem={mem}
-#SBATCH --gres={gres}
-#SBATCH --output={output}
-
-module purge
-
-{modules}
-
-{conda}
-
-export PYTHONUNBUFFERED=1
-
-cd {codeloc}
-
-echo "Currently using:"
-echo $(which python)
-echo "in:"
-echo $(pwd)
-echo "sbatch file: $0"
-
-python resume.py --path {resume}
\ No newline at end of file
diff --git a/spaces/Nultx/VITS-TTS/text/cantonese.py b/spaces/Nultx/VITS-TTS/text/cantonese.py
deleted file mode 100644
index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000
--- a/spaces/Nultx/VITS-TTS/text/cantonese.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('jyutjyu')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('A', 'ei˥'),
- ('B', 'biː˥'),
- ('C', 'siː˥'),
- ('D', 'tiː˥'),
- ('E', 'iː˥'),
- ('F', 'e˥fuː˨˩'),
- ('G', 'tsiː˥'),
- ('H', 'ɪk̚˥tsʰyː˨˩'),
- ('I', 'ɐi˥'),
- ('J', 'tsei˥'),
- ('K', 'kʰei˥'),
- ('L', 'e˥llou˨˩'),
- ('M', 'ɛːm˥'),
- ('N', 'ɛːn˥'),
- ('O', 'ou˥'),
- ('P', 'pʰiː˥'),
- ('Q', 'kʰiːu˥'),
- ('R', 'aː˥lou˨˩'),
- ('S', 'ɛː˥siː˨˩'),
- ('T', 'tʰiː˥'),
- ('U', 'juː˥'),
- ('V', 'wiː˥'),
- ('W', 'tʊk̚˥piː˥juː˥'),
- ('X', 'ɪk̚˥siː˨˩'),
- ('Y', 'waːi˥'),
- ('Z', 'iː˨sɛːt̚˥')
-]]
-
-
-def number_to_cantonese(text):
- return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
-
-
-def latin_to_ipa(text):
- for regex, replacement in _latin_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def cantonese_to_ipa(text):
- text = number_to_cantonese(text.upper())
- text = converter.convert(text).replace('-','').replace('$',' ')
- text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
- text = re.sub(r'[、;:]', ',', text)
- text = re.sub(r'\s*,\s*', ', ', text)
- text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*？\s*', '? ', text)  # full-width question mark (a bare ASCII '?' here would match the empty string)
- text = re.sub(r'\s*!\s*', '! ', text)
- text = re.sub(r'\s*$', '', text)
- return text
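
A hedged usage sketch of the pipeline above. It assumes `cn2an` and an OpenCC build that can load the `jyutjyu` conversion profile are available (the stock OpenCC data may not include it), and it makes no claim about the exact IPA strings produced:

```python
# Assumes cn2an and a 'jyutjyu'-capable OpenCC installation; the exact IPA
# output depends entirely on those packages and the converter data.
samples = [
    '你好，OpenAI！',   # full-width punctuation plus Latin letters
    '今日係2023年',     # digits are expanded first by number_to_cantonese()
]
for s in samples:
    print(s, '->', cantonese_to_ipa(s))
```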
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/save_encoder.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/save_encoder.py
deleted file mode 100644
index 24a842e4092663c79c92a299fa85747b7c0bed64..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/save_encoder.py
+++ /dev/null
@@ -1,214 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Translate pre-processed data with a trained model.
-"""
-
-import numpy as np
-import torch
-from fairseq import checkpoint_utils, options, progress_bar, tasks, utils
-from fairseq.sequence_generator import EnsembleModel
-from fairseq.utils import safe_hasattr
-
-
-def get_avg_pool(
- models, sample, prefix_tokens, src_dict, remove_bpe, has_langtok=False
-):
- model = EnsembleModel(models)
-
- # model.forward normally channels prev_output_tokens into the decoder
- # separately, but SequenceGenerator directly calls model.encoder
- encoder_input = {
- k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens"
- }
-
- # compute the encoder output for each beam
- encoder_outs = model.forward_encoder(encoder_input)
- np_encoder_outs = encoder_outs[0].encoder_out.cpu().numpy().astype(np.float32)
- encoder_mask = 1 - encoder_outs[0].encoder_padding_mask.cpu().numpy().astype(
- np.float32
- )
- encoder_mask = np.expand_dims(encoder_mask.T, axis=2)
- if has_langtok:
- encoder_mask = encoder_mask[1:, :, :]
-        np_encoder_outs = np_encoder_outs[1:, :, :]
- masked_encoder_outs = encoder_mask * np_encoder_outs
- avg_pool = (masked_encoder_outs / encoder_mask.sum(axis=0)).sum(axis=0)
- return avg_pool
-
-
-def main(args):
- assert args.path is not None, "--path required for generation!"
- assert (
- not args.sampling or args.nbest == args.beam
- ), "--sampling requires --nbest to be equal to --beam"
- assert (
- args.replace_unk is None or args.raw_text
- ), "--replace-unk requires a raw text dataset (--raw-text)"
-
- args.beam = 1
- utils.import_user_module(args)
-
- if args.max_tokens is None:
- args.max_tokens = 12000
- print(args)
- use_cuda = torch.cuda.is_available() and not args.cpu
-
- # Load dataset splits
- task = tasks.setup_task(args)
- task.load_dataset(args.gen_subset)
-
- # Set dictionaries
- try:
- src_dict = getattr(task, "source_dictionary", None)
- except NotImplementedError:
- src_dict = None
- tgt_dict = task.target_dictionary
-
- # Load ensemble
- print("| loading model(s) from {}".format(args.path))
- models, _model_args = checkpoint_utils.load_model_ensemble(
- args.path.split(":"),
- arg_overrides=eval(args.model_overrides),
- task=task,
- )
-
- # Optimize ensemble for generation
- for model in models:
- model.make_generation_fast_(
- beamable_mm_beam_size=None if args.no_beamable_mm else args.beam,
- need_attn=args.print_alignment,
- )
- if args.fp16:
- model.half()
- if use_cuda:
- model.cuda()
-
- # Load alignment dictionary for unknown word replacement
- # (None if no unknown word replacement, empty if no path to align dictionary)
- align_dict = utils.load_align_dict(args.replace_unk)
-
- # Load dataset (possibly sharded)
- itr = task.get_batch_iterator(
- dataset=task.dataset(args.gen_subset),
- max_tokens=args.max_tokens,
- max_positions=utils.resolve_max_positions(
- task.max_positions(),
- ),
- ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test,
- required_batch_size_multiple=args.required_batch_size_multiple,
- num_shards=args.num_shards,
- shard_id=args.shard_id,
- num_workers=args.num_workers,
- ).next_epoch_itr(shuffle=False)
-
- num_sentences = 0
- source_sentences = []
- shard_id = 0
- all_avg_pool = None
- encoder_has_langtok = (
- safe_hasattr(task.args, "encoder_langtok")
- and task.args.encoder_langtok is not None
- and safe_hasattr(task.args, "lang_tok_replacing_bos_eos")
- and not task.args.lang_tok_replacing_bos_eos
- )
- with progress_bar.build_progress_bar(args, itr) as t:
- for sample in t:
- if sample is None:
- print("Skipping None")
- continue
- sample = utils.move_to_cuda(sample) if use_cuda else sample
- if "net_input" not in sample:
- continue
-
- prefix_tokens = None
- if args.prefix_size > 0:
- prefix_tokens = sample["target"][:, : args.prefix_size]
-
- with torch.no_grad():
- avg_pool = get_avg_pool(
- models,
- sample,
- prefix_tokens,
- src_dict,
- args.post_process,
- has_langtok=encoder_has_langtok,
- )
- if all_avg_pool is not None:
- all_avg_pool = np.concatenate((all_avg_pool, avg_pool))
- else:
- all_avg_pool = avg_pool
-
- if not isinstance(sample["id"], list):
- sample_ids = sample["id"].tolist()
- else:
- sample_ids = sample["id"]
- for i, sample_id in enumerate(sample_ids):
- # Remove padding
- src_tokens = utils.strip_pad(
- sample["net_input"]["src_tokens"][i, :], tgt_dict.pad()
- )
-
- # Either retrieve the original sentences or regenerate them from tokens.
- if align_dict is not None:
- src_str = task.dataset(args.gen_subset).src.get_original_text(
- sample_id
- )
- else:
- if src_dict is not None:
- src_str = src_dict.string(src_tokens, args.post_process)
- else:
- src_str = ""
-
- if not args.quiet:
- if src_dict is not None:
- print("S-{}\t{}".format(sample_id, src_str))
-
- source_sentences.append(f"{sample_id}\t{src_str}")
-
- num_sentences += sample["nsentences"]
- if all_avg_pool.shape[0] >= 1000000:
- with open(
- f"{args.encoder_save_dir}/all_avg_pool.{args.source_lang}.{shard_id}",
- "w",
- ) as avg_pool_file:
- all_avg_pool.tofile(avg_pool_file)
- with open(
- f"{args.encoder_save_dir}/sentences.{args.source_lang}.{shard_id}",
- "w",
- ) as sentence_file:
- sentence_file.writelines(f"{line}\n" for line in source_sentences)
- all_avg_pool = None
- source_sentences = []
- shard_id += 1
-
- if all_avg_pool is not None:
- with open(
- f"{args.encoder_save_dir}/all_avg_pool.{args.source_lang}.{shard_id}", "w"
- ) as avg_pool_file:
- all_avg_pool.tofile(avg_pool_file)
- with open(
- f"{args.encoder_save_dir}/sentences.{args.source_lang}.{shard_id}", "w"
- ) as sentence_file:
- sentence_file.writelines(f"{line}\n" for line in source_sentences)
- return None
-
-
-def cli_main():
- parser = options.get_generation_parser()
- parser.add_argument(
- "--encoder-save-dir",
- default="",
- type=str,
- metavar="N",
- help="directory to save encoder outputs",
- )
- args = options.parse_args_and_arch(parser)
- main(args)
-
-
-if __name__ == "__main__":
- cli_main()
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py
deleted file mode 100644
index a25433dd8edae2f0b52d7d0eeeb829cabc6b4b89..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py
+++ /dev/null
@@ -1,289 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-def gen_forward():
-
- kernels = [3, 5, 7, 15, 31, 63, 127, 255]
- seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]]
-
- head = """
-/**
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include "lightconv_cuda.cuh"
-
-std::vector<at::Tensor> lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l) {
-
- at::DeviceGuard g(input.device());
- const auto minibatch = input.size(0);
- const auto numFeatures = input.size(1);
- const auto sequenceLength = input.size(2);
-
- const auto numHeads = filters.size(0);
- const auto filterSize = filters.size(1);
-
- const auto numFiltersInBlock = numFeatures / numHeads;
-
- const dim3 blocks(minibatch, numFeatures);
-
- auto output = at::zeros_like(input);
- auto stream = at::cuda::getCurrentCUDAStream();
-"""
-
- sequence_if = """
- if (sequenceLength <= {seq}) {{
- switch(filterSize) {{
-"""
-
- case_k = """
- case {k}:
-"""
-
- main_block = """
- if (padding_l == {pad}) {{
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_forward", ([&] {{
- lightconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t>
- <<<blocks, {b_size}, 0, stream>>>(
- input.data<scalar_t>(),
- filters.data<scalar_t>(),
- minibatch,
- sequenceLength,
- numFeatures,
- numFiltersInBlock,
- output.data<scalar_t>());
- }}));
- }} else
-"""
-
- bad_padding = """
- {
- std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl;
- }
- break;
-"""
-
- bad_filter = """
- default:
- std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl;
- }
-"""
-
- con_else = """
- } else
-"""
-
- final_else = """
- {
- switch(filterSize) {
-"""
-
- final_return = """
- }
-
- return {output};
-}
-"""
-
- with open("lightconv_cuda_forward.cu", "w") as forward:
- forward.write(head)
- for seq in seqs:
- forward.write(sequence_if.format(seq=seq))
- for k in kernels:
- forward.write(case_k.format(k=k))
- for pad in [k // 2, k - 1]:
- forward.write(main_block.format(k=k, b_size=seq, pad=pad))
- forward.write(bad_padding)
- forward.write(bad_filter)
- forward.write(con_else)
-
- forward.write(final_else)
- for k in kernels:
- forward.write(case_k.format(k=k))
- for pad in [k // 2, k - 1]:
- forward.write(main_block.format(k=k, b_size=seq, pad=pad))
- forward.write(bad_padding)
- forward.write(bad_filter)
- forward.write(final_return)
-
-
-def gen_backward():
-
- head = """
-/**
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include "lightconv_cuda.cuh"
-
-std::vector<at::Tensor> lightconv_cuda_backward(
- at::Tensor gradOutput,
- int padding_l,
- at::Tensor input,
- at::Tensor filters) {
-
- // gradWrtInput
- const int minibatch = input.size(0);
- const int numFeatures = input.size(1);
- const int sequenceLength = input.size(2);
-
- const int numHeads = filters.size(0);
- const int filterSize = filters.size(1);
-
- const dim3 gradBlocks(minibatch, numFeatures);
- const dim3 weightGradFirstpassShortBlocks(minibatch, numHeads);
- const dim3 weightGradSecondpassBlocks(numHeads, filterSize);
-
- const int numFiltersInBlock = numFeatures / numHeads;
-
- auto gradInput = at::zeros_like(input);
- auto gradFilters = at::zeros_like(filters);
-
- at::DeviceGuard g(input.device());
- auto stream = at::cuda::getCurrentCUDAStream();
-
- switch(filterSize) {
-"""
-
- sequence_if = """
- if (sequenceLength <= {seq}) {{
-"""
-
- case_k = """
- case {k}:
-"""
-
- main_block = """
- if (padding_l == {p}) {{
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_backward", ([&] {{
- lightconv_grad_wrt_input_kernel<{k}, {b_size}, {p}, scalar_t>
- <<<gradBlocks, {b_size}, 0, stream>>>(
- gradOutput.data<scalar_t>(),
- filters.data<scalar_t>(),
- minibatch,
- sequenceLength,
- numFeatures,
- numFiltersInBlock,
- gradInput.data<scalar_t>());
-
-"""
-
- weight_grad_short = """
- at::Tensor tempSumGradFilters = at::zeros({{minibatch, numHeads, filterSize}}, input.options().dtype(at::kFloat));
- lightconv_grad_wrt_weights_firstpass_short_kernel<{k}, {b_size}, {p}, scalar_t>
- <<<weightGradFirstpassShortBlocks, {b_size}, 0, stream>>>(
- input.data<scalar_t>(),
- gradOutput.data<scalar_t>(),
- minibatch,
- sequenceLength,
- numFeatures,
- numFiltersInBlock,
- numHeads,
- tempSumGradFilters.data<float>()
- );
-
- lightconv_grad_wrt_weights_secondpass_short_kernel<{k}, {b_size}, scalar_t>
- <<<weightGradSecondpassBlocks, {b_size}, 0, stream>>>(
- tempSumGradFilters.data<float>(),
- minibatch,
- numFiltersInBlock,
- gradFilters.data<scalar_t>()
- );
- }}));
- }} else
-"""
-
- weight_grad = """
- at::Tensor tempSumGradFilters = at::zeros({{minibatch, numFeatures, filterSize}}, input.options().dtype(at::kFloat));
- lightconv_grad_wrt_weights_firstpass_kernel<{k}, {b_size}, {p}, scalar_t>
- <<<gradBlocks, {b_size}, 0, stream>>>(
- input.data<scalar_t>(),
- gradOutput.data<scalar_t>(),
- minibatch,
- sequenceLength,
- numFeatures,
- numFiltersInBlock,
- tempSumGradFilters.data<float>()
- );
-
- lightconv_grad_wrt_weights_secondpass_kernel<{k}, {b_size}, scalar_t>
- <<<weightGradSecondpassBlocks, {b_size}, 0, stream>>>(
- tempSumGradFilters.data<float>(),
- minibatch,
- numFiltersInBlock,
- gradFilters.data<scalar_t>()
- );
- }}));
- }} else
-"""
-
- bad_padding = """
- {
- std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl;
- }
-"""
-
- breakout = """
- break;
-"""
-
- bad_filter = """
- default:
- std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl;
-"""
-
- con_else = """
- } else
-"""
-
- final_else = """
- {
- switch(filterSize) {
-"""
-
- last_return = """
- }
- return {gradInput, gradFilters};
-}
-"""
-
- kernels = [3, 5, 7, 15, 31, 63, 127, 255]
- seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]]
- thresh = [32, 32, 64, 128, 256, -1, -1, -1]
- max_mem = [-1, -1, -1, -1, -1, 192, 96, 64]
-
- with open("lightconv_cuda_backward.cu", "w") as backward:
- backward.write(head)
- for (k, t, mem) in zip(kernels, thresh, max_mem):
- backward.write(case_k.format(k=k))
- for seq in seqs:
- if (t == -1 or seq <= t) and (mem == -1 or seq < mem):
- backward.write(sequence_if.format(seq=seq))
- for p in [k // 2, k - 1]:
- backward.write(main_block.format(k=k, b_size=seq, p=p))
- backward.write(weight_grad_short.format(k=k, b_size=seq, p=p))
- backward.write(bad_padding)
- else:
- for p in [k // 2, k - 1]:
- backward.write(main_block.format(k=k, b_size=32, p=p))
- backward.write(weight_grad.format(k=k, b_size=32, p=p))
- backward.write(bad_padding)
- backward.write(breakout)
- break
- backward.write(con_else)
- backward.write(bad_filter)
- backward.write(last_return)
-
-
-if __name__ == "__main__":
- gen_forward()
- gen_backward()
diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/m2m_100/install_dependecies.sh b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/m2m_100/install_dependecies.sh
deleted file mode 100644
index 82a1054745264a56fbec4a8eb593884f8a42bd08..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/m2m_100/install_dependecies.sh
+++ /dev/null
@@ -1,78 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-CWD=`pwd`
-INSTALL_PATH=$CWD/tokenizers/thirdparty
-
-MOSES=$INSTALL_PATH/mosesdecoder
-if [ ! -d $MOSES ]; then
- echo 'Cloning Moses github repository (for tokenization scripts)...'
- git clone https://github.com/moses-smt/mosesdecoder.git $MOSES
- cd $MOSES
- # To deal with differences in handling ' vs "
- git checkout 03578921cc1a03402
- cd -
-fi
-
-WMT16_SCRIPTS=$INSTALL_PATH/wmt16-scripts
-if [ ! -d $WMT16_SCRIPTS ]; then
- echo 'Cloning Romanian tokenization scripts'
- git clone https://github.com/rsennrich/wmt16-scripts.git $WMT16_SCRIPTS
-fi
-
-KYTEA=$INSTALL_PATH/kytea
-if [ ! -f $KYTEA/bin/kytea ]; then
- git clone https://github.com/neubig/kytea.git $KYTEA
- cd $KYTEA
- autoreconf -i
- ./configure --prefix=`pwd`
- make
- make install
- cd ..
-fi
-
-export MECAB=$INSTALL_PATH/mecab-0.996-ko-0.9.2
-if [ ! -f $MECAB/bin/mecab ]; then
- cd $INSTALL_PATH
- curl -LO https://bitbucket.org/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz
- tar zxfv mecab-0.996-ko-0.9.2.tar.gz
- cd mecab-0.996-ko-0.9.2/
- ./configure --prefix=`pwd`
- make
- make install
-
- cd ..
- curl -LO https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.1.1-20180720.tar.gz
- tar zxfv mecab-ko-dic-2.1.1-20180720.tar.gz
- cd mecab-ko-dic-2.1.1-20180720/
- ./autogen.sh
- ./configure --prefix=`pwd` --with-dicdir=$MECAB/lib/mecab/dic/mecab-ko-dic --with-mecab-config=$MECAB/bin/mecab-config
- make
- sh -c 'echo "dicdir=$MECAB/lib/mecab/dic/mecab-ko-dic" > $MECAB/etc/mecabrc'
- make install
- cd $CWD
-fi
-
-INDIC_RESOURCES_PATH=$INSTALL_PATH/indic_nlp_resources
-if [ ! -d $INDIC_RESOURCES_PATH ]; then
- echo 'Cloning indic_nlp_resources'
- git clone https://github.com/anoopkunchukuttan/indic_nlp_resources.git $INDIC_RESOURCES_PATH
-fi
-
-
-if [ ! -f $INSTALL_PATH/seg_my.py ]; then
- cd $INSTALL_PATH
- wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip
- unzip wat2020.my-en.zip
- # switch to python3
- cat wat2020.my-en/myseg.py |sed 's/^sys.std/###sys.std/g' | sed 's/### sys/sys/g' | sed 's/unichr/chr/g' > seg_my.py
- cd $CWD
-fi
-
-
-pip install pythainlp sacrebleu indic-nlp-library
-
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_mfcc_feature.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_mfcc_feature.py
deleted file mode 100644
index 70d0016663b7d0b90033f4eb301b527f2c92a3f8..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/dump_mfcc_feature.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import sys
-
-import soundfile as sf
-import torch
-import torchaudio
-
-from feature_utils import get_path_iterator, dump_feature
-
-logging.basicConfig(
- format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
- datefmt="%Y-%m-%d %H:%M:%S",
- level=os.environ.get("LOGLEVEL", "INFO").upper(),
- stream=sys.stdout,
-)
-logger = logging.getLogger("dump_mfcc_feature")
-
-
-class MfccFeatureReader(object):
- def __init__(self, sample_rate):
- self.sample_rate = sample_rate
-
- def read_audio(self, path, ref_len=None):
- wav, sr = sf.read(path)
- assert sr == self.sample_rate, sr
- if wav.ndim == 2:
- wav = wav.mean(-1)
- assert wav.ndim == 1, wav.ndim
- if ref_len is not None and abs(ref_len - len(wav)) > 160:
- logging.warning(f"ref {ref_len} != read {len(wav)} ({path})")
- return wav
-
- def get_feats(self, path, ref_len=None):
- x = self.read_audio(path, ref_len)
- with torch.no_grad():
- x = torch.from_numpy(x).float()
- x = x.view(1, -1)
-
- mfccs = torchaudio.compliance.kaldi.mfcc(
- waveform=x,
- sample_frequency=self.sample_rate,
- use_energy=False,
- ) # (time, freq)
- mfccs = mfccs.transpose(0, 1) # (freq, time)
- deltas = torchaudio.functional.compute_deltas(mfccs)
- ddeltas = torchaudio.functional.compute_deltas(deltas)
- concat = torch.cat([mfccs, deltas, ddeltas], dim=0)
- concat = concat.transpose(0, 1).contiguous() # (time, freq)
- return concat
-
-
-def main(tsv_dir, split, nshard, rank, feat_dir, sample_rate):
- reader = MfccFeatureReader(sample_rate)
- generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank)
- dump_feature(reader, generator, num, split, nshard, rank, feat_dir)
-
-
-
-if __name__ == "__main__":
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("tsv_dir")
- parser.add_argument("split")
- parser.add_argument("nshard", type=int)
- parser.add_argument("rank", type=int)
- parser.add_argument("feat_dir")
- parser.add_argument("--sample_rate", type=int, default=16000)
- args = parser.parse_args()
- logger.info(args)
-
- main(**vars(args))
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py
deleted file mode 100644
index f226d5f50514ecb5ee3b4f1031df750609a56112..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-
-import soundfile as sf
-from examples.textless_nlp.gslm.unit2speech.tts_data import (
- TacotronInputDataset,
-)
-from examples.textless_nlp.gslm.unit2speech.utils import (
- load_quantized_audio_from_file,
- load_tacotron,
- load_waveglow,
- synthesize_audio,
-)
-
-
-def get_logger():
- log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
- logging.basicConfig(format=log_format, level=logging.INFO)
- logger = logging.getLogger(__name__)
- return logger
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="Wav2Vec 2.0 speech generator."
- )
- parser.add_argument(
- "--quantized_unit_path",
- type=str,
- help="K-means model file path to use for inference",
- )
- parser.add_argument(
- "--tts_model_path",
- type=str,
- help="TTS model file path to use for inference",
- )
- parser.add_argument(
- "--waveglow_path",
- type=str,
- help="Path to the waveglow checkpoint (vocoder).",
- )
- parser.add_argument("--max_decoder_steps", type=int, default=2000)
- parser.add_argument("--denoiser_strength", type=float, default=0.1)
- parser.add_argument(
- "--out_audio_dir",
- type=str,
- help="Output directory to dump audio files",
- )
-
- return parser
-
-
-def main(args, logger):
- # Load quantized audio
- logger.info(f"Loading quantized audio from {args.quantized_unit_path}...")
- names_batch, quantized_units_batch = load_quantized_audio_from_file(
- file_path=args.quantized_unit_path
- )
-
- logger.info(f"Loading TTS model from {args.tts_model_path}...")
- tacotron_model, sample_rate, hparams = load_tacotron(
- tacotron_model_path=args.tts_model_path,
- max_decoder_steps=args.max_decoder_steps,
- )
-
- logger.info(f"Loading Waveglow model from {args.waveglow_path}...")
- waveglow, denoiser = load_waveglow(waveglow_path=args.waveglow_path)
-
- tts_dataset = TacotronInputDataset(hparams)
- for name, quantized_units in zip(names_batch, quantized_units_batch):
- quantized_units_str = " ".join(map(str, quantized_units))
- tts_input = tts_dataset.get_tensor(quantized_units_str)
- mel, aud, aud_dn, has_eos = synthesize_audio(
- tacotron_model,
- waveglow,
- denoiser,
- tts_input.unsqueeze(0),
- strength=args.denoiser_strength,
- )
- out_file_path = os.path.join(args.out_audio_dir, f"{name}.wav")
- sf.write(
- f"{out_file_path}", aud_dn[0].cpu().float().numpy(), sample_rate
- )
-
-
-if __name__ == "__main__":
- parser = get_parser()
- args = parser.parse_args()
- logger = get_logger()
- logger.info(args)
- main(args, logger)
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/ctc.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/ctc.py
deleted file mode 100644
index 10e3618382c86a84466cb4264d62f31537980251..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/ctc.py
+++ /dev/null
@@ -1,295 +0,0 @@
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import math
-from argparse import Namespace
-from dataclasses import dataclass, field
-from omegaconf import II
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.data.data_utils import post_process
-from fairseq.tasks import FairseqTask
-from fairseq.logging.meters import safe_round
-
-
-@dataclass
-class CtcCriterionConfig(FairseqDataclass):
- zero_infinity: bool = field(
- default=False,
- metadata={"help": "zero inf loss when source length <= target length"},
- )
- sentence_avg: bool = II("optimization.sentence_avg")
- post_process: str = field(
- default="letter",
- metadata={
- "help": "how to post process predictions into words. can be letter, "
- "wordpiece, BPE symbols, etc. "
- "See fairseq.data.data_utils.post_process() for full list of options"
- },
- )
- wer_kenlm_model: Optional[str] = field(
- default=None,
- metadata={
- "help": "if this is provided, use kenlm to compute wer (along with other wer_* args)"
- },
- )
- wer_lexicon: Optional[str] = field(
- default=None,
- metadata={"help": "lexicon to use with wer_kenlm_model"},
- )
- wer_lm_weight: float = field(
- default=2.0,
- metadata={"help": "lm weight to use with wer_kenlm_model"},
- )
- wer_word_score: float = field(
- default=-1.0,
- metadata={"help": "lm word score to use with wer_kenlm_model"},
- )
-
- wer_args: Optional[str] = field(
- default=None,
- metadata={
- "help": "DEPRECATED: tuple of (wer_kenlm_model, wer_lexicon, wer_lm_weight, wer_word_score)"
- },
- )
-
-
-@register_criterion("ctc", dataclass=CtcCriterionConfig)
-class CtcCriterion(FairseqCriterion):
- def __init__(self, cfg: CtcCriterionConfig, task: FairseqTask):
- super().__init__(task)
- self.blank_idx = (
- task.target_dictionary.index(task.blank_symbol)
- if hasattr(task, "blank_symbol")
- else 0
- )
- self.pad_idx = task.target_dictionary.pad()
- self.eos_idx = task.target_dictionary.eos()
- self.post_process = cfg.post_process
-
- if cfg.wer_args is not None:
- (
- cfg.wer_kenlm_model,
- cfg.wer_lexicon,
- cfg.wer_lm_weight,
- cfg.wer_word_score,
- ) = eval(cfg.wer_args)
-
- if cfg.wer_kenlm_model is not None:
- from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder
-
- dec_args = Namespace()
- dec_args.nbest = 1
- dec_args.criterion = "ctc"
- dec_args.kenlm_model = cfg.wer_kenlm_model
- dec_args.lexicon = cfg.wer_lexicon
- dec_args.beam = 50
- dec_args.beam_size_token = min(50, len(task.target_dictionary))
- dec_args.beam_threshold = min(50, len(task.target_dictionary))
- dec_args.lm_weight = cfg.wer_lm_weight
- dec_args.word_score = cfg.wer_word_score
- dec_args.unk_weight = -math.inf
- dec_args.sil_weight = 0
-
- self.w2l_decoder = W2lKenLMDecoder(dec_args, task.target_dictionary)
- else:
- self.w2l_decoder = None
-
- self.zero_infinity = cfg.zero_infinity
- self.sentence_avg = cfg.sentence_avg
-
- def forward(self, model, sample, reduce=True):
- net_output = model(**sample["net_input"])
- lprobs = model.get_normalized_probs(
- net_output, log_probs=True
- ).contiguous() # (T, B, C) from the encoder
-
- if "src_lengths" in sample["net_input"]:
- input_lengths = sample["net_input"]["src_lengths"]
- else:
- if net_output["padding_mask"] is not None:
- non_padding_mask = ~net_output["padding_mask"]
- input_lengths = non_padding_mask.long().sum(-1)
- else:
- input_lengths = lprobs.new_full(
- (lprobs.size(1),), lprobs.size(0), dtype=torch.long
- )
-
- pad_mask = (sample["target"] != self.pad_idx) & (
- sample["target"] != self.eos_idx
- )
- targets_flat = sample["target"].masked_select(pad_mask)
- if "target_lengths" in sample:
- target_lengths = sample["target_lengths"]
- else:
- target_lengths = pad_mask.sum(-1)
-
- with torch.backends.cudnn.flags(enabled=False):
- loss = F.ctc_loss(
- lprobs,
- targets_flat,
- input_lengths,
- target_lengths,
- blank=self.blank_idx,
- reduction="sum",
- zero_infinity=self.zero_infinity,
- )
-
- ntokens = (
- sample["ntokens"] if "ntokens" in sample else target_lengths.sum().item()
- )
-
- sample_size = sample["target"].size(0) if self.sentence_avg else ntokens
- logging_output = {
- "loss": utils.item(loss.data), # * sample['ntokens'],
- "ntokens": ntokens,
- "nsentences": sample["id"].numel(),
- "sample_size": sample_size,
- }
-
- if not model.training:
- import editdistance
-
- with torch.no_grad():
- lprobs_t = lprobs.transpose(0, 1).float().contiguous().cpu()
-
- c_err = 0
- c_len = 0
- w_errs = 0
- w_len = 0
- wv_errs = 0
- for lp, t, inp_l in zip(
- lprobs_t,
- sample["target_label"]
- if "target_label" in sample
- else sample["target"],
- input_lengths,
- ):
- lp = lp[:inp_l].unsqueeze(0)
-
- decoded = None
- if self.w2l_decoder is not None:
- decoded = self.w2l_decoder.decode(lp)
- if len(decoded) < 1:
- decoded = None
- else:
- decoded = decoded[0]
- if len(decoded) < 1:
- decoded = None
- else:
- decoded = decoded[0]
-
- p = (t != self.task.target_dictionary.pad()) & (
- t != self.task.target_dictionary.eos()
- )
- targ = t[p]
- targ_units = self.task.target_dictionary.string(targ)
- targ_units_arr = targ.tolist()
-
- toks = lp.argmax(dim=-1).unique_consecutive()
- pred_units_arr = toks[toks != self.blank_idx].tolist()
-
- c_err += editdistance.eval(pred_units_arr, targ_units_arr)
- c_len += len(targ_units_arr)
-
- targ_words = post_process(targ_units, self.post_process).split()
-
- pred_units = self.task.target_dictionary.string(pred_units_arr)
- pred_words_raw = post_process(pred_units, self.post_process).split()
-
- if decoded is not None and "words" in decoded:
- pred_words = decoded["words"]
- w_errs += editdistance.eval(pred_words, targ_words)
- wv_errs += editdistance.eval(pred_words_raw, targ_words)
- else:
- dist = editdistance.eval(pred_words_raw, targ_words)
- w_errs += dist
- wv_errs += dist
-
- w_len += len(targ_words)
-
- logging_output["wv_errors"] = wv_errs
- logging_output["w_errors"] = w_errs
- logging_output["w_total"] = w_len
- logging_output["c_errors"] = c_err
- logging_output["c_total"] = c_len
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
-
- loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
- ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs))
- nsentences = utils.item(
- sum(log.get("nsentences", 0) for log in logging_outputs)
- )
- sample_size = utils.item(
- sum(log.get("sample_size", 0) for log in logging_outputs)
- )
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- metrics.log_scalar("ntokens", ntokens)
- metrics.log_scalar("nsentences", nsentences)
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
-
- c_errors = sum(log.get("c_errors", 0) for log in logging_outputs)
- metrics.log_scalar("_c_errors", c_errors)
- c_total = sum(log.get("c_total", 0) for log in logging_outputs)
- metrics.log_scalar("_c_total", c_total)
- w_errors = sum(log.get("w_errors", 0) for log in logging_outputs)
- metrics.log_scalar("_w_errors", w_errors)
- wv_errors = sum(log.get("wv_errors", 0) for log in logging_outputs)
- metrics.log_scalar("_wv_errors", wv_errors)
- w_total = sum(log.get("w_total", 0) for log in logging_outputs)
- metrics.log_scalar("_w_total", w_total)
-
- if c_total > 0:
- metrics.log_derived(
- "uer",
- lambda meters: safe_round(
- meters["_c_errors"].sum * 100.0 / meters["_c_total"].sum, 3
- )
- if meters["_c_total"].sum > 0
- else float("nan"),
- )
- if w_total > 0:
- metrics.log_derived(
- "wer",
- lambda meters: safe_round(
- meters["_w_errors"].sum * 100.0 / meters["_w_total"].sum, 3
- )
- if meters["_w_total"].sum > 0
- else float("nan"),
- )
- metrics.log_derived(
- "raw_wer",
- lambda meters: safe_round(
- meters["_wv_errors"].sum * 100.0 / meters["_w_total"].sum, 3
- )
- if meters["_w_total"].sum > 0
- else float("nan"),
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improves distributed training speed.
- """
- return True
diff --git a/spaces/ORI-Muchim/RaidenTTS/attentions.py b/spaces/ORI-Muchim/RaidenTTS/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/RaidenTTS/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # padd along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git "a/spaces/Omdena-Milan/milan-chapter-agrifoods/pages/\360\237\244\226 demo.py" "b/spaces/Omdena-Milan/milan-chapter-agrifoods/pages/\360\237\244\226 demo.py"
deleted file mode 100644
index 029dab9eabbba36172d12c1a01adc979f085ef4c..0000000000000000000000000000000000000000
--- "a/spaces/Omdena-Milan/milan-chapter-agrifoods/pages/\360\237\244\226 demo.py"
+++ /dev/null
@@ -1,344 +0,0 @@
-
-from pycaret.regression import load_model, predict_model
-import streamlit as st
-import pandas as pd
-import numpy as np
-
-from PIL import Image
-#image = Image.open('omdena_logo.png')
-#st.set_page_config(page_title='omdena-milan', page_icon=image)
-
-
-
-model1 = load_model('data/models/cereals_knn')
-model2= load_model('data/models/fruits&nuts_knn')
-model3 = load_model('data/models/grapes_olives_et')
-model4 = load_model('data/models/fresh_veg_et')
-model5 = load_model('data/models/industrial_crop_et')
-
-
-def predict1(model1, input_df1):
- predictions_df1 = predict_model(estimator=model1, data=input_df1)
- predictions1 = predictions_df1['prediction_label'][0]
- return predictions1
-
-def predict2(model2, input_df2):
- predictions_df2= predict_model(estimator=model2, data=input_df2)
- predictions2 = predictions_df2['prediction_label'][0]
- return predictions2
-
-def predict3(model3, input_df3):
- predictions_df3 = predict_model(estimator=model3, data=input_df3)
- predictions3 = predictions_df3['prediction_label'][0]
- return predictions3
-
-def predict4(model4, input_df4):
- predictions_df4= predict_model(estimator=model4, data=input_df4)
- predictions4 = predictions_df4['prediction_label'][0]
- return predictions4
-
-def predict5(model5, input_df5):
- predictions_df5= predict_model(estimator=model5, data=input_df5)
- predictions5 = predictions_df5['prediction_label'][0]
- return predictions5
-
-
-def run():
-
-
-
- add_selectbox = st.sidebar.selectbox(
- "Please choose your crop",
- ("Cereal & Legumes", "Fruits & Nuts", "Grapes & Olives", "Fresh Vegetables","Industrial crops", ))
-
- st.sidebar.info('Omdena-Milan Agrifood')
- st.sidebar.success('https://omdena.com/local-chapters/milan-italy-chapter/')
-
- st.title("Crop Prediction")
-
- if add_selectbox == 'Cereal & Legumes':
-
- temperature_max = st.number_input('Temperature max (°C)', min_value= 20, max_value= 50)
-
- temperature_min = st.number_input('Temperature min (°C)', min_value= -5, max_value= 20)
-
- relative_humidity = st.number_input('Relative humidity (%)', min_value=0, max_value=100)
-
- root_moisture = st.number_input('Root moisture', max_value=1)
-
- total_area_ha = st.number_input('Total area(ha)', min_value=0, max_value=6150)
-
- fertilizer_tonnes = st.number_input('Fertilizer (tonnes)', min_value=0, max_value=3405)
-
- fertilizer = st.selectbox('Type of fertilizer', ['calcium cyanamide', 'nitrogen-potassium', 'peaty-amend',
- 'organic-nitrogen', 'organic', 'ammonium sulphate',
- 'nitrogen-phosphorous', 'phosphorus-potassium', 'urea'])
-
- crop = st.selectbox('Type of crop', ['barley', 'bro-bean', 'chick-peas', 'dry-k-bean', 'd-wheat',
- 'early potatoes', 'lentil', 'oats', 'potatoes', 'grain pea',
- 'oats mix', 'spring barley', 'winter barley', 'c-wheat', 'maize',
- 'protein pea', 'rice', 'sorghum', 'sugar beet', 'other cereals',
- 'rye', 'titicale', 'c-spr-wheat&spelt', 'c-wint-wheat&spelt',
- 'sweet potatoes', 'sweet lupin', 'rye mix', 'cereal mix',
- 'wint-cereal-mix'])
-
- city = st.selectbox('City', ['Agrigento', 'Alessandria', 'Ancona', 'Arezzo', 'Ascoli Piceno',
- 'Asti', 'Avellino', 'Bari', 'Barletta-Andria-Trani', 'Belluno',
- 'Benevento', 'Bergamo', 'Biella', 'Bologna', 'Bolzano / Bozen',
- 'Brescia', 'Brindisi', 'Cagliari', 'Caltanissetta', 'Campobasso',
- 'Carbonia-Iglesias', 'Caserta', 'Catania', 'Catanzaro', 'Chieti',
- 'Como', 'Cosenza', 'Cremona', 'Crotone', 'Cuneo', 'Enna', 'Fermo',
- 'Ferrara', 'Firenze', 'Foggia', 'Forlì-Cesena', 'Frosinone',
- 'Genova', 'Gorizia', 'Grosseto', 'Imperia', 'Isernia', "L'Aquila",
- 'La Spezia', 'Latina', 'Lecce', 'Lecco', 'Livorno', 'Lodi',
- 'Lucca', 'Macerata', 'Mantova', 'Massa-Carrara', 'Matera',
- 'Medio Campidano', 'Messina', 'Milano', 'Modena',
- 'Monza e della Brianza', 'Napoli', 'Novara', 'Nuoro', 'Ogliastra',
- 'Olbia-Tempio', 'Oristano', 'Padova', 'Palermo', 'Parma', 'Pavia',
- 'Perugia', 'Pesaro e Urbino', 'Pescara', 'Piacenza', 'Pisa',
- 'Pistoia', 'Pordenone', 'Potenza', 'Prato', 'Ragusa', 'Ravenna',
- 'Reggio di Calabria', "Reggio nell'Emilia", 'Rieti', 'Rimini',
- 'Roma', 'Rovigo', 'Salerno', 'Sassari', 'Savona', 'Siena',
- 'Siracusa', 'Sondrio', 'Sud Sardegna', 'Taranto', 'Teramo',
- 'Terni', 'Torino', 'Trapani', 'Trentino Alto Adige / Südtirol',
- 'Trento', 'Treviso', 'Trieste', 'Udine',"Valle d'Aosta / Vallée d'Aoste",
- 'Varese', 'Venezia', 'Verbano-Cusio-Ossola', 'Vercelli',
- 'Verona', 'Vibo Valentia', 'Vicenza', 'Viterbo'])
- output1=""
-
- input_dict1 = {'T2M_MAX': temperature_max, 'T2M_MIN':temperature_min,'RH2M' : relative_humidity, 'total_area_ha': total_area_ha,
- 'GWETROOT' : root_moisture, 'Type_crop' : crop, 'Type_fertilizer': fertilizer, 'Fertilizers_tonnes': fertilizer_tonnes ,'City' : city}
- input_df1 = pd.DataFrame([input_dict1])
-
- if st.button("Predict Cereal & Legumes"):
- output1 = predict1(model1=model1, input_df1=input_df1)
- output1 = 'Tons ' + "{:.2f}".format(output1)
-
- st.success('The output is {}'.format(output1))
-
- if add_selectbox == 'Fruits & Nuts':
-
- temperature_max = st.number_input('Temperature max (°C)', min_value= 20, max_value= 50)
-
- temperature_min = st.number_input('Temperature min (°C)', min_value= -5, max_value= 20)
-
- relative_humidity = st.number_input('Relative humidity (%)', min_value=0, max_value=100)
-
- root_moisture = st.number_input('Root moisture', max_value=1)
-
- total_area_ha = st.number_input('Total area(ha)', min_value=0, max_value=430)
-
- fertilizer_tonnes = st.number_input('Fertilizer (tonnes)', min_value=0, max_value=3477)
-
- fertilizer = st.selectbox('Type of fertilizer', ['calcium cyanamide', 'nitrogen-potassium', 'peaty-amend',
- 'organic-nitrogen', 'organic', 'ammonium sulphate',
- 'nitrogen-phosphorous', 'phosphorus-potassium', 'urea'])
-
- crop = st.selectbox('Type of crop', ['apple', 'apricot', 'cherry in complex', 'kiwi', 'nectarine',
- 'plum', 'hazelnut', 'pear', 'peach', 'almond'])
-
- city = st.selectbox('City', ['Agrigento', 'Alessandria', 'Ancona', 'Arezzo', 'Ascoli Piceno',
- 'Asti', 'Avellino', 'Bari', 'Belluno', 'Benevento', 'Bergamo',
- 'Biella', 'Bologna', 'Brescia', 'Brindisi', 'Caltanissetta',
- 'Campobasso', 'Caserta', 'Catania', 'Catanzaro', 'Chieti', 'Como',
- 'Cosenza', 'Cremona', 'Crotone', 'Enna', 'Ferrara', 'Firenze',
- 'Foggia', 'Frosinone', 'Genova', 'Gorizia', 'Grosseto', 'Imperia',
- 'Isernia', 'La Spezia', 'Latina', 'Lecce', 'Lecco', 'Livorno',
- 'Lodi', 'Lucca', 'Macerata', 'Mantova', 'Matera', 'Messina',
- 'Milano', 'Modena', 'Napoli', 'Novara', 'Nuoro', 'Oristano',
- 'Padova', 'Palermo', 'Parma', 'Pavia', 'Perugia',
- 'Pesaro e Urbino', 'Pescara', 'Piacenza', 'Pisa', 'Pistoia',
- 'Pordenone', 'Potenza', 'Prato', 'Ragusa', 'Ravenna',
- 'Reggio di Calabria', "Reggio nell'Emilia", 'Rieti', 'Rimini',
- 'Roma', 'Rovigo', 'Salerno', 'Sassari', 'Savona', 'Siena',
- 'Siracusa', 'Taranto', 'Teramo', 'Terni', 'Torino', 'Trapani',
- 'Treviso', 'Trieste', 'Udine', 'Varese', 'Venezia',
- 'Verbano-Cusio-Ossola', 'Vercelli', 'Verona', 'Vibo Valentia',
- 'Vicenza', 'Viterbo', 'Carbonia-Iglesias', 'Medio Campidano',
- 'Ogliastra', 'Olbia-Tempio', 'Barletta-Andria-Trani', 'Fermo',
- 'Monza e della Brianza'])
-
-
- output2=""
-
- input_dict2 = {'T2M_MAX': temperature_max, 'T2M_MIN':temperature_min,'RH2M' : relative_humidity, 'total_area_ha': total_area_ha,
- 'GWETROOT' : root_moisture, 'Type_crop' : crop, 'Type_fertilizer': fertilizer, 'Fertilizers_tonnes': fertilizer_tonnes ,'City' : city}
- input_df2 = pd.DataFrame([input_dict2])
-
- if st.button("Predict Fruits & Nuts"):
- output2 = predict2(model2=model2, input_df2=input_df2)
- output2 = 'Tons ' + "{:.2f}".format(output2)
-
- st.success('The output is {}'.format(output2))
-
-
- if add_selectbox == 'Grapes & Olives':
-
- temperature_max = st.number_input('Temperature max (°C)', min_value= 20, max_value= 50)
-
- temperature_min = st.number_input('Temperature min (°C)', min_value= -5, max_value= 20)
-
- relative_humidity = st.number_input('Relative humidity (%)', min_value=0, max_value=100)
-
- root_moisture = st.number_input('Root moisture', max_value=1)
-
- total_area_ha = st.number_input('Total area(ha)', min_value=0, max_value=5010)
-
- fertilizer_tonnes = st.number_input('Fertilizer (tonnes)', min_value=0, max_value=2852)
-
- fertilizer = st.selectbox('Type of fertilizer', ['calcium cyanamide', 'nitrogen-potassium', 'peaty-amend',
- 'organic-nitrogen', 'organic', 'ammonium sulphate',
- 'nitrogen-phosphorous', 'phosphorus-potassium', 'urea'])
-
- crop = st.selectbox('Type of crop', ['grapes-n.e.c', 'grapes-wines(N-pdo/pgi)', 'table olives',
- 'grapes-table', 'oil olives', 'other olives',
- 'grapes-wines(Y-pdo)', 'grapes-wines(Y-pgi)', 'grapes-raisins'])
-
- city = st.selectbox('City', ['Agrigento', 'Alessandria', 'Ancona', 'Arezzo', 'Ascoli Piceno',
- 'Asti', 'Avellino', 'Bari', 'Belluno', 'Benevento', 'Bergamo',
- 'Biella', 'Bologna', 'Brescia', 'Brindisi', 'Caltanissetta',
- 'Campobasso', 'Caserta', 'Catania', 'Catanzaro', 'Chieti',
- 'Cosenza', 'Cremona', 'Crotone', 'Enna', 'Ferrara', 'Firenze',
- 'Foggia', 'Frosinone', 'Genova', 'Grosseto', 'Imperia', 'Isernia',
- 'La Spezia', 'Latina', 'Lecce', 'Livorno', 'Lodi', 'Lucca',
- 'Macerata', 'Mantova', 'Matera', 'Messina', 'Milano', 'Modena',
- 'Napoli', 'Novara', 'Nuoro', 'Oristano', 'Padova', 'Palermo',
- 'Parma', 'Pavia', 'Perugia', 'Pesaro e Urbino', 'Pescara',
- 'Piacenza', 'Pisa', 'Pistoia', 'Pordenone', 'Potenza', 'Prato',
- 'Ragusa', 'Ravenna', 'Reggio di Calabria', "Reggio nell'Emilia",
- 'Rieti', 'Rimini', 'Roma', 'Rovigo', 'Salerno', 'Sassari',
- 'Savona', 'Siena', 'Siracusa', 'Taranto', 'Teramo', 'Terni',
- 'Torino', 'Trapani', 'Treviso', 'Trieste', 'Udine', 'Varese',
- 'Venezia', 'Verbano-Cusio-Ossola', 'Vercelli', 'Verona',
- 'Vibo Valentia', 'Vicenza', 'Viterbo', 'Carbonia-Iglesias',
- 'Medio Campidano', 'Ogliastra', 'Olbia-Tempio',
- 'Barletta-Andria-Trani', 'Fermo', 'Monza e della Brianza'])
-
-
- output3=""
-
- input_dict3 = {'T2M_MAX': temperature_max, 'T2M_MIN':temperature_min,'RH2M' : relative_humidity, 'total_area_ha': total_area_ha,
- 'GWETROOT' : root_moisture, 'Type_crop' : crop, 'Type_fertilizer': fertilizer, 'Fertilizers_tonnes': fertilizer_tonnes ,'City' : city}
- input_df3 = pd.DataFrame([input_dict3])
-
- if st.button("Predict Grapes & Olives"):
- output3 = predict3(model3=model3, input_df3=input_df3)
- output3 = 'Tons ' + "{:.2f}".format(output3)
-
- st.success('The output is {}'.format(output3))
-
-
-
- if add_selectbox == 'Fresh Vegetables':
-
- temperature_max = st.number_input('Temperature max (°C)', min_value= 20, max_value= 50)
-
- temperature_min = st.number_input('Temperature min (°C)', min_value= -5, max_value= 20)
-
- relative_humidity = st.number_input('Relative humidity (%)', min_value=0, max_value=100)
-
- root_moisture = st.number_input('Root moisture', max_value=1)
-
- total_area_ha = st.number_input('Total area(ha)', min_value=0, max_value=431)
-
- fertilizer_tonnes = st.number_input('Fertilizer (tonnes)', min_value=0, max_value=3473)
-
- fertilizer = st.selectbox('Type of fertilizer', ['calcium cyanamide', 'nitrogen-potassium', 'peaty-amend',
- 'organic-nitrogen', 'organic', 'ammonium sulphate',
- 'nitrogen-phosphorous', 'phosphorus-potassium', 'urea'])
-
- crop = st.selectbox('Type of crop', ['cauliflower&broccoli-field', 'courgette-field', 'egg-plant-field',
- 'fresh-beans-field', 'lettuce-field', 'onions-field',
- 'red-pepper-field', 'chicory-field', 'melon-field', 'fresh-tomato'])
-
- city = st.selectbox('City', ['Agrigento', 'Alessandria', 'Ancona', 'Arezzo', 'Ascoli Piceno',
- 'Asti', 'Avellino', 'Bari', 'Belluno', 'Benevento', 'Bergamo',
- 'Biella', 'Bologna', 'Brescia', 'Brindisi', 'Caltanissetta',
- 'Campobasso', 'Caserta', 'Catania', 'Catanzaro', 'Chieti',
- 'Cosenza', 'Cremona', 'Crotone', 'Enna', 'Ferrara', 'Firenze',
- 'Foggia', 'Frosinone', 'Genova', 'Gorizia', 'Grosseto', 'Imperia',
- 'Isernia', 'La Spezia', 'Latina', 'Lecce', 'Livorno', 'Lodi',
- 'Lucca', 'Macerata', 'Mantova', 'Matera', 'Messina', 'Milano',
- 'Modena', 'Napoli', 'Novara', 'Nuoro', 'Oristano', 'Padova',
- 'Palermo', 'Parma', 'Pavia', 'Perugia', 'Pesaro e Urbino',
- 'Pescara', 'Piacenza', 'Pisa', 'Pistoia', 'Pordenone', 'Potenza',
- 'Prato', 'Ragusa', 'Ravenna', 'Reggio di Calabria',
- "Reggio nell'Emilia", 'Rimini', 'Roma', 'Rovigo', 'Salerno',
- 'Sassari', 'Savona', 'Siena', 'Siracusa', 'Taranto', 'Teramo',
- 'Terni', 'Torino', 'Trapani', 'Treviso', 'Trieste', 'Udine',
- 'Varese', 'Venezia', 'Verbano-Cusio-Ossola', 'Vercelli', 'Verona',
- 'Vibo Valentia', 'Vicenza', 'Viterbo', 'Carbonia-Iglesias',
- 'Medio Campidano', 'Ogliastra', 'Olbia-Tempio', 'Barletta-Andria-Trani',
- 'Fermo', 'Monza e della Brianza'])
-
-
- output4=""
-
- input_dict4 = {'T2M_MAX': temperature_max, 'T2M_MIN':temperature_min,'RH2M' : relative_humidity, 'total_area_ha': total_area_ha,
- 'GWETROOT' : root_moisture, 'Type_crop' : crop, 'Type_fertilizer': fertilizer, 'Fertilizers_tonnes': fertilizer_tonnes ,'City' : city}
- input_df4 = pd.DataFrame([input_dict4])
-
- if st.button("Predict Fresh Vegetables"):
- output4 = predict4(model4=model4, input_df4=input_df4)
- output4 = 'Tons ' + "{:.2f}".format(output4)
-
- st.success('The output is {}'.format(output4))
-
-
-
- if add_selectbox == 'Industrial crops':
-
- temperature_max = st.number_input('Temperature max (°C)', min_value= 20, max_value= 50)
-
- temperature_min = st.number_input('Temperature min (°C)', min_value= -5, max_value= 20)
-
- relative_humidity = st.number_input('Relative humidity (%)', min_value=0, max_value=100)
-
- root_moisture = st.number_input('Root moisture', max_value=1)
-
- total_area_ha = st.number_input('Total area(ha)', min_value=0, max_value=1440)
-
- fertilizer_tonnes = st.number_input('Fertilizer (tonnes)', min_value=0, max_value=4824)
-
- fertilizer = st.selectbox('Type of fertilizer', ['calcium cyanamide', 'nitrogen-potassium', 'peaty-amend',
- 'organic-nitrogen', 'organic', 'ammonium sulphate',
- 'nitrogen-phosphorous', 'phosphorus-potassium', 'urea'])
-
- crop = st.selectbox('Type of crop', ['hemp', 'rape', 'soya beans', 'tobacco', 'flax', 'parsley-field',
- 'sunflower'])
-
- city = st.selectbox('City', ['Alessandria', 'Ancona', 'Arezzo', 'Ascoli Piceno', 'Asti',
- 'Avellino', 'Bari', 'Belluno', 'Benevento', 'Bergamo', 'Biella',
- 'Bologna', 'Brescia', 'Caltanissetta', 'Campobasso', 'Caserta',
- 'Catania', 'Catanzaro', 'Chieti', 'Como', 'Cosenza', 'Cremona',
- 'Crotone', 'Ferrara', 'Firenze', 'Foggia', 'Frosinone', 'Genova',
- 'Gorizia', 'Grosseto', 'Imperia', 'Isernia', 'Latina', 'Lecco',
- 'Livorno', 'Lodi', 'Lucca', 'Macerata', 'Mantova', 'Matera',
- 'Milano', 'Modena', 'Napoli', 'Novara', 'Nuoro', 'Oristano',
- 'Padova', 'Parma', 'Pavia', 'Perugia', 'Pescara', 'Piacenza',
- 'Pisa', 'Pistoia', 'Pordenone', 'Potenza', 'Prato', 'Ravenna',
- "Reggio nell'Emilia", 'Rieti', 'Rimini', 'Roma', 'Rovigo',
- 'Salerno', 'Sassari', 'Savona', 'Siena', 'Taranto', 'Teramo',
- 'Terni', 'Torino', 'Treviso', 'Trieste', 'Udine', 'Varese',
- 'Venezia', 'Verbano-Cusio-Ossola', 'Vercelli', 'Verona', 'Vicenza',
- 'Viterbo', 'Carbonia-Iglesias', 'Medio Campidano', 'Ogliastra',
- 'Vibo Valentia', 'Barletta-Andria-Trani', 'Fermo',
- 'Monza e della Brianza', 'La Spezia'])
-
-
- output5=""
-
- input_dict5 = {'T2M_MAX': temperature_max, 'T2M_MIN':temperature_min,'RH2M' : relative_humidity, 'total_area_ha': total_area_ha,
- 'GWETROOT' : root_moisture, 'Type_crop' : crop, 'Type_fertilizer': fertilizer, 'Fertilizers_tonnes': fertilizer_tonnes ,'City' : city}
- input_df5 = pd.DataFrame([input_dict5])
-
- if st.button("Predict Industrial crops"):
- output5 = predict5(model5=model5, input_df5=input_df5)
- output5 = 'Tons ' + "{:.2f}".format(output5)
-
- st.success('The output is {}'.format(output5))
-
-if __name__ == '__main__':
-
- run()
-
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/ops/wrappers.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/ops/wrappers.py
deleted file mode 100644
index 0ed9a0cb8d7c0e0ec2748dd89c652756653cac78..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/ops/wrappers.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import warnings
-
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def resize(input,
- size=None,
- scale_factor=None,
- mode='nearest',
- align_corners=None,
- warning=True):
- if warning:
- if size is not None and align_corners:
- input_h, input_w = tuple(int(x) for x in input.shape[2:])
- output_h, output_w = tuple(int(x) for x in size)
- if output_h > input_h or output_w > input_w:
- if ((output_h > 1 and output_w > 1 and input_h > 1
- and input_w > 1) and (output_h - 1) % (input_h - 1)
- and (output_w - 1) % (input_w - 1)):
- warnings.warn(
- f'When align_corners={align_corners}, '
- 'the output would more aligned if '
- f'input size {(input_h, input_w)} is `x+1` and '
- f'out size {(output_h, output_w)} is `nx+1`')
- return F.interpolate(input, size, scale_factor, mode, align_corners)
-
-
-class Upsample(nn.Module):
-
- def __init__(self,
- size=None,
- scale_factor=None,
- mode='nearest',
- align_corners=None):
- super(Upsample, self).__init__()
- self.size = size
- if isinstance(scale_factor, tuple):
- self.scale_factor = tuple(float(factor) for factor in scale_factor)
- else:
- self.scale_factor = float(scale_factor) if scale_factor else None
- self.mode = mode
- self.align_corners = align_corners
-
- def forward(self, x):
- if not self.size:
- size = [int(t * self.scale_factor) for t in x.shape[-2:]]
- else:
- size = self.size
- return resize(x, size, None, self.mode, self.align_corners)
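A minimal usage sketch (not part of the diff), assuming the `resize` function and `Upsample` module deleted above are in scope; the tensor shapes are illustrative only.

# Sketch: drive the wrapper exactly as segmentation heads typically do.
import torch
feat = torch.randn(1, 64, 32, 32)                         # N, C, H, W
out = resize(feat, size=(128, 128), mode='bilinear', align_corners=False)
up = Upsample(scale_factor=2, mode='nearest')             # doubles H and W lazily in forward()
print(out.shape, up(feat).shape)                          # (1, 64, 128, 128) (1, 64, 64, 64)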
diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/__init__.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/op/fused_act.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/op/fused_act.py
deleted file mode 100644
index 74815adafbf7a37d5d4def41ac60dbdeefdbff30..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/op/fused_act.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, bias=True, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- if bias:
- self.bias = nn.Parameter(torch.zeros(channel))
-
- else:
- self.bias = None
-
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, inputs):
- return fused_leaky_relu(inputs, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(inputs, bias=None, negative_slope=0.2, scale=2 ** 0.5):
- if bias is not None:
- rest_dim = [1] * (inputs.ndim - bias.ndim - 1)
- return (
- F.leaky_relu(
- inputs + bias.view(1, bias.shape[0], *rest_dim), negative_slope=negative_slope
- )
- * scale
- )
-
- else:
- return F.leaky_relu(inputs, negative_slope=negative_slope) * scale
\ No newline at end of file
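A small sketch (not part of the diff) of what the CPU fallback above computes: a per-channel bias add broadcast over the spatial dims, a leaky ReLU, then a sqrt(2) rescale. Only standard PyTorch calls are used; shapes are illustrative.

import torch
import torch.nn.functional as F

x = torch.randn(4, 8, 16, 16)                  # N, C, H, W
bias = torch.zeros(8)
ref = F.leaky_relu(x + bias.view(1, 8, 1, 1), negative_slope=0.2) * (2 ** 0.5)
# fused_leaky_relu(x, bias) from the file above would return the same tensor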
diff --git a/spaces/Paatiii1712/stock_market_forcasting/app.py b/spaces/Paatiii1712/stock_market_forcasting/app.py
deleted file mode 100644
index 2e0ee2fcfc1888217d359f62a3d6db3f4c119f35..0000000000000000000000000000000000000000
--- a/spaces/Paatiii1712/stock_market_forcasting/app.py
+++ /dev/null
@@ -1,515 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Mon Jul 25 21:06:22 2022
-
-@author: ayush
-"""
-
-# Import the Libraries
-import streamlit as st
-import pandas as pd
-import numpy as np
-import plotly.express as px
-import plotly.graph_objects as go
-
-import yfinance as yf
-import datetime as dttm
-
-st.set_page_config(page_title='Stock Market Forecasting', page_icon='stock2.jpg')
-st.image('stock2.jpg','Stock Market Forecasting')
-st.header('Stock Market Forecasting')
-st.write('---')
-st.sidebar.header('User Input Parameters')
-
-ipt = st.sidebar.selectbox( label ="Take Input DataSet From",
- options=['Yfinance'])
-if ipt == 'Yfinance':
- st.sidebar.write('**1).Stock Name**')
- stick_name = st.sidebar.text_input('Enter the Name of Stock as in yfinance',
- value= 'NESTLEIND.NS'
- )
- st.sidebar.write('**2).Starting Date and Ending Date used for Training Data**')
- startDate = st.sidebar.date_input('Starting Date',
- value= dttm.date(2011, 1, 1),
- min_value = dttm.date(2010,1,1)
- )
- endDate = st.sidebar.date_input('Ending Date',
- value= dttm.date(2022, 7, 1),
- min_value = dttm.date(2010,1,1)
- )
- n = st.sidebar.number_input("Approx Number of Day's you want to Forecaste",
- min_value = 1,
- max_value = 30,
- value = 10)
-else:
- st.error('You selected a wrong option')
-
-if 'start' not in st.session_state:
- st.session_state['start'] = False
-
-start = st.sidebar.checkbox('check to start', value = st.session_state['start'])
-
-
-if start:
- st.session_state['start'] = True
-
-else:
- st.session_state['start'] = False
-
-if 'stock_past' not in st.session_state:
- st.session_state['stock_past'] = None
-result = st.sidebar.button('Clear the Session State')
-if result:
- for key in st.session_state.keys():
- del st.session_state[key]
-
-if 'start' not in st.session_state:
- st.session_state['start'] = False
-
-if st.session_state['start']==True:
- # Extract Dataset from yfinance
- if ipt == 'Yfinance':
-
- GetData = yf.Ticker(stick_name)
-
- yf_data = pd.DataFrame(GetData.history(start=startDate, end=endDate))
-
- st.subheader('Input DataFrame')
- st.write(stick_name,'*Stock DataFrame*')
- st.dataframe(yf_data)
-
- if yf_data.empty:
- st.error('No Internet Connection')
- yf_data = None
-
-
- if 'stock_present' not in st.session_state:
- st.session_state['stock_present'] = stick_name
-
- st.session_state['stock_present'] = stick_name
-
-
- st.subheader('Visualisation')
-
- # Different types of plots
- st.sidebar.header('Visualisation')
- #Visualisation
- chart_select = st.sidebar.selectbox(
- label ="Type of chart",
- options=['Lineplots','Scatterplots','Histogram']
- )
-
-
- numeric_columns = list(yf_data.select_dtypes(['float','int']).columns)
- numeric_columns.sort()
-
- if chart_select == 'Scatterplots':
- st.sidebar.subheader('Scatterplot Settings')
- try:
- y_values = st.sidebar.selectbox('Y axis',options=numeric_columns)
-
- plot = px.scatter(data_frame=yf_data,y=y_values,
- title=str('Scatter Plot for '+y_values+' column'))
- st.write(plot)
- except Exception as e:
- print(e)
- if chart_select == 'Histogram':
- st.sidebar.subheader('Histogram Settings')
- try:
- x_values = st.sidebar.selectbox('X axis',options=numeric_columns)
- plot = px.histogram(data_frame=yf_data,x=x_values,marginal="box",
- title=str('Histogram Plot for '+x_values+' column'))
- st.write(plot)
- except Exception as e:
- print(e)
- if chart_select == 'Lineplots':
- st.sidebar.subheader('Lineplots Settings')
- try:
- y_values = st.sidebar.selectbox('Y axis',options=numeric_columns)
- plot = px.line(yf_data,y=y_values,
- title=str('Line Plot for '+y_values+' column'))
- st.write(plot)
- except Exception as e:
- print(e)
-
-
- # Final Dataset for Model Building i.e. selecting only "close" column
- #st.write(numeric_columns)
- if "Close" in numeric_columns:
- final_data = pd.DataFrame(yf_data.Close)
- final_data = final_data.sort_index(ascending=True)
- final_data.rename(columns={'Close': 'Close'},inplace = True)
-
- elif "close" in numeric_columns:
- final_data = pd.DataFrame(yf_data.close)
- final_data = final_data.sort_index(ascending=True)
- final_data.rename(columns={'close': 'Close'},inplace = True)
-
- elif "CLOSE" in numeric_columns:
- final_data = pd.DataFrame(yf_data.CLOSE)
- final_data = final_data.sort_index(ascending=True)
- final_data.rename(columns={'CLOSE': 'Close'},inplace = True)
-
- else:
- final_data = None
- st.subheader('Close column is not present in the file. Please check the file and re-upload.')
-
- st.subheader('DataSet used for Training')
- st.write(final_data)
-
- try:
- # Setting Frequency of Close column
- training_data = final_data.copy()
- training_data = training_data.asfreq('B')
- training_data.ffill(inplace=True)
- #st.write(data)
- #st.write(data.shape)
- #st.write(data.isnull().sum())
-
- except:
- training_data = None
- st.error('Please check the settings. You have not chosen the appropriate option, or the file has not been uploaded or is not in the right format.')
-
- # Error function
- from sklearn.metrics import mean_squared_error
- from sklearn.metrics import mean_absolute_error
-
- # Data Transformation=========================================================================================================================
- from sklearn.preprocessing import MinMaxScaler
- scaler = MinMaxScaler(feature_range=(0.001,1))
- full_data_minmax = scaler.fit_transform(np.array(training_data).reshape(-1,1))
- #st.write(train_data_minmax)
- #st.write(train_data.index)
-
- full_data_minmax = pd.DataFrame(full_data_minmax, columns = ['close'])
- full_data_minmax.index = training_data.index
- #st.write(full_data_minmax)
-
-
- # Spliting the DataSet into Train and Test
- train_data = training_data[:int(len(final_data)*0.8)]
- test_data = training_data[int(len(final_data)*0.8):]
-
- train_data_minmax = scaler.fit_transform(np.array(train_data).reshape(-1,1))
- train_data_minmax = pd.DataFrame(train_data_minmax, columns = ['close'])
- train_data_minmax.index = train_data.index
-
-
- #Model Building===============================================================================================================================
-
- from sktime.forecasting.compose import AutoEnsembleForecaster
- from sktime.forecasting.exp_smoothing import ExponentialSmoothing
- from sktime.forecasting.fbprophet import Prophet
- import holidays
- import random
-
- # Prophet Model-------------------------------------------------------------------------------------------------------------------------------
-
- # Holiday
- holiday = pd.DataFrame([])
-
- for date, name in sorted(holidays.India(years=[2011,2012,2013,2014,2015,2016,2017,2018,2019,2020,2021,2022]).items()):
- holiday = holiday.append(pd.DataFrame({'ds': date, 'holiday': "India_Holidays"}, index=[0]), ignore_index=True)
- holiday['ds'] = pd.to_datetime(holiday['ds'], format='%Y-%m-%d', errors='ignore')
-
- # HyperParameter Tunning
- if st.session_state['stock_present']!= st.session_state['stock_past']:
-
- st.write('**Hyperparameter tuning is done only once for every stock, so be patient**')
- st.write('**Hyperparameter tuning has started for the Prophet model**')
- from sklearn.model_selection import ParameterGrid
- params_grid = {'changepoint_prior_scale':[10,25],
- 'n_changepoints' : [10,25],
- 'seasonality_prior_scale':[0.05,1]}
- Pro_model_parameters = pd.DataFrame(columns = ['Parameters','MSE','RMSE'])
-
- grid = ParameterGrid(params_grid)
-
- Pro_bar = st.progress(0)
-
- i = 1
- for p in grid:
- test = pd.DataFrame()
- # print(i,' ',p)
- random.seed(0)
- train_model =Prophet(freq='B',
- changepoint_prior_scale = p['changepoint_prior_scale'],
- n_changepoints = p['n_changepoints'],
- seasonality_mode = 'multiplicative',
- seasonality_prior_scale=p['seasonality_prior_scale'],
- weekly_seasonality=False,
- daily_seasonality = False,
- yearly_seasonality = True,
- add_country_holidays={'country_name': 'India'},
- holidays=holiday)
- train_model.fit(train_data_minmax)
- fh = list(range(1,int(len(test_data))+1))
- test_predictions = train_model.predict(fh=fh)
- test_predictions=scaler.inverse_transform(test_predictions)
- mse = mean_squared_error(test_data, test_predictions)
- rmse = np.sqrt(mse)
- Pro_bar.progress(i/len(grid))
- i = i+1
- #print('Root Mean Squre Error(RMSE)------------------------------------',rmse)
- Pro_model_parameters = Pro_model_parameters.append({'Parameters':p, 'MSE':mse, 'RMSE':rmse},ignore_index=True)
-
-
- Pro_parameters = Pro_model_parameters.sort_values(by=['RMSE'])
- Pro_parameters = Pro_parameters.reset_index(drop=True)
- #st.write(Pro_parameters)
- st.write('**Hyperparameter tuning is done for the Prophet model**')
-
- if 'changepoint_prior_scale' not in st.session_state:
- st.session_state['changepoint_prior_scale'] = Pro_parameters['Parameters'][0]['changepoint_prior_scale']
-
- else:
- pass
- st.session_state['changepoint_prior_scale'] = Pro_parameters['Parameters'][0]['changepoint_prior_scale']
-
- if 'n_changepoints' not in st.session_state:
- st.session_state['n_changepoints'] = Pro_parameters['Parameters'][0]['n_changepoints']
-
- else:
- pass
- st.session_state['n_changepoints'] = Pro_parameters['Parameters'][0]['n_changepoints']
-
- if 'seasonality_prior_scale' not in st.session_state:
- st.session_state['seasonality_prior_scale'] = Pro_parameters['Parameters'][0]['seasonality_prior_scale']
-
- else:
- pass
- st.session_state['seasonality_prior_scale'] = Pro_parameters['Parameters'][0]['seasonality_prior_scale']
-
- else:
- pass
-
- Pro_model = Prophet(freq='B', seasonality_mode='multiplicative',
- changepoint_prior_scale=st.session_state['changepoint_prior_scale'],
- n_changepoints=st.session_state['n_changepoints'],
- seasonality_prior_scale=st.session_state['seasonality_prior_scale'],
- add_country_holidays={'country_name': 'India'}, verbose=10,
- holidays=holiday,
- yearly_seasonality=True, weekly_seasonality=False , daily_seasonality=False)
- #Pro_model.fit(train_data_minmax)
-
- #fh = list(range(1,int(len(test_data_minmax))+1))
- # fh1 = pd.DatetimeIndex(np.array(test_data.index))
- # fh1
- #test_predictions_minmax = Pro_model.predict(fh=fh)
- #st.write(test_predictions_minmax)
-
- #test_predictions=scaler.inverse_transform(test_predictions_minmax)
- #test_predictions = pd.DataFrame(test_predictions, columns = ['Close'])
- #test_predictions.index = test_data.index
- #st.write(test_predictions)
-
-
-
- # Exponential Smoothing Model-----------------------------------------------------------------------------------------------------------------
-
-
- # HyperParameter Tunning
- if st.session_state['stock_present']!= st.session_state['stock_past']:
-
- st.write('**Hyperparameter tuning has started for the Exponential Smoothing model**')
- from sklearn.model_selection import ParameterGrid
- params_grid = {'trend':["add", "mul"],
- 'seasonal' : ["add", "mul"]
- }
- Expo_model_parameters = pd.DataFrame(columns = ['Parameters','MSE','RMSE'])
-
- grid = ParameterGrid(params_grid)
-
- Expo_bar = st.progress(0)
- i = 1
- for p in grid:
- test = pd.DataFrame()
- # print(i,' ',p)
- random.seed(0)
- train_model = ExponentialSmoothing(trend=p['trend'],
- seasonal=p['seasonal'],
- sp=262,
- damped_trend=False)
- train_model.fit(train_data_minmax)
- fh = list(range(1,int(len(test_data))+1))
- test_predictions = train_model.predict(fh=fh)
- test_predictions=scaler.inverse_transform(test_predictions)
- mse = mean_squared_error(test_data, test_predictions)
- rmse = np.sqrt(mse)
- Expo_bar.progress(i/len(grid))
- i = i+1
- # print('Root Mean Squre Error(RMSE)------------------------------------',rmse)
- Expo_model_parameters = Expo_model_parameters.append({'Parameters':p, 'MSE':mse, 'RMSE':rmse},ignore_index=True)
-
-
- Expo_parameters = Expo_model_parameters.sort_values(by=['RMSE'])
- Expo_parameters = Expo_parameters.reset_index(drop=True)
- #st.write(Expo_parameters)
- st.write('**Hyperparameter tuning is done for the Exponential Smoothing model**')
-
- if 'trend' not in st.session_state:
- st.session_state['trend'] = Expo_parameters['Parameters'][0]['trend']
-
- else:
- pass
- st.session_state['trend'] = Expo_parameters['Parameters'][0]['trend']
-
- if 'seasonal' not in st.session_state:
- st.session_state['seasonal'] = Expo_parameters['Parameters'][0]['seasonal']
-
- else:
- pass
- st.session_state['seasonal'] = Expo_parameters['Parameters'][0]['seasonal']
-
- else:
- pass
-
- Expo_model = ExponentialSmoothing(trend=st.session_state['trend'],
- seasonal=st.session_state['seasonal'],
- sp=262,
- damped_trend=False)
- #Expo_model.fit(train_data_minmax)
-
- #fh = list(range(1,int(len(test_data_minmax))+1))
- # fh1 = pd.DatetimeIndex(np.array(test_data.index))
- # fh1
- #test_predictions_minmax = Expo_model.predict(fh=fh)
- #st.write(test_predictions_minmax)
-
- #test_predictions=scaler.inverse_transform(test_predictions_minmax)
- #test_predictions = pd.DataFrame(test_predictions, columns = ['Close'])
- #test_predictions.index = test_data.index
- #st.write(test_predictions)
-
-
- # AutoEnsembleForecaster Model----------------------------------------------------------------------------------------------------------------
-
- st.subheader('Model Building')
- st.write('**Validating the final model**')
- forecasters = [
- ("prophet" , Pro_model),
- ("expo" , Expo_model)
- ]
-
- Ensmodel = AutoEnsembleForecaster(forecasters=forecasters, n_jobs=-1, random_state=42)
- Ensmodel.fit(train_data_minmax)
-
-
- fh = list(range(1,int(len(test_data))+1))
- # fh1 = pd.DatetimeIndex(np.array(test_data.index))
- # fh1
- test_predictionsEns = Ensmodel.predict(fh=fh)
- #st.write(test_predictionsEns)
-
-
- test_predictions=scaler.inverse_transform(test_predictionsEns)
- test_predictions = pd.DataFrame(test_predictions, columns = ['Close'])
- test_predictions.index = test_data.index
- #st.write(test_predictions)
-
- mse = mean_squared_error(test_data, test_predictions)
- rmse = np.sqrt(mse)
- mae = mean_absolute_error(test_data, test_predictions)
- mape = np.mean(np.abs((test_data-test_predictions)/test_data))*100
- errors = {'MSE':mse, 'RMSE':rmse, 'MAE':mae, 'MAPE':mape}
- errors_df = pd.DataFrame(errors)
-
-
- fig = go.Figure()
-
- fig.add_trace(go.Scatter(x=train_data.index, y=train_data['Close'], mode='lines', name='TRAIN'))
- fig.add_trace(go.Scatter(x=test_data.index, y=test_data['Close'], mode='lines', name='TEST'))
- fig.add_trace(go.Scatter(x=test_predictions.index, y=test_predictions['Close'], mode='lines', name='PREDICTION'))
-
- fig.update_layout(title_text='Forecast vs Actuals', title_x=0.5)
-
- st.plotly_chart(fig)
-
- st.write(errors_df)
-
- st.write('**If you are satisfied with the validation, start the forecast. Otherwise, reset the session state and re-run the app for hyperparameter tuning.**')
-
-
- numdays = st.number_input("Number of Day's you want to Forecaste",
- min_value = 1,
- max_value = n*3,
- value = n)
-
- if 'fr' not in st.session_state:
- st.session_state['fr'] = 0
-
- if st.session_state['stock_present']!= st.session_state['stock_past']:
- st.session_state['stock_past'] = st.session_state['stock_present']
- st.session_state['fr'] = 0
-
- frct = st.selectbox('Start the Forecast', options = ['No', 'Yes'],
- index = st.session_state['fr'])
-
- if frct == 'Yes':
- st.subheader('Forecasting')
- st.session_state['fr'] = 1
- forecasters = [
- ("prophet" , Pro_model),
- ("expo" , Expo_model)
- ]
-
- st.write('Training the model')
- Ensmodel = AutoEnsembleForecaster(forecasters=forecasters, n_jobs=-1, random_state=42)
- Ensmodel.fit(full_data_minmax)
-
- st.write('Forecasting from trained model')
- prediction_list = [(pd.to_datetime(endDate) + dttm.timedelta(days=x)).date() for x in range(0,numdays)]
- prediction_list = pd.to_datetime(prediction_list)
- forecaste = pd.DataFrame(prediction_list, columns=['Date'])
- #st.write(forecaste)
- for_df = forecaste.set_index('Date')
- #st.write(for_df)
- for_df = for_df.asfreq('B')
- #n = int(len(for_df.index))
- #st.write(n)
-
-
- #fh = list(range(1,n+1))
- fh1 = pd.DatetimeIndex(np.array(for_df.index))
- # fh1
- final_predictions = Ensmodel.predict(fh=fh1)
- #st.write(final_predictions)
-
-
- final_predictions=scaler.inverse_transform(final_predictions)
- #st.write(final_predictions)
- for_df['Close'] = final_predictions
- st.markdown('### Forecast DataSet')
- st.write(for_df)
-
-
-
- fig = go.Figure()
-
- fig.add_trace(go.Scatter(x=training_data.index, y=training_data['Close'], mode='lines', name='TRAIN'))
- fig.add_trace(go.Scatter(x=for_df.index, y=for_df['Close'], mode='lines', name='Forecast'))
- fig.update_layout(title_text='Final Forecast', title_x=0.5)
-
- st.write(fig)
-
-
- elif frct == 'No':
- st.session_state['fr'] = 0
-
- else:
- pass
-
-
-
- if st.sidebar.button('Made By'):
- name = ['Ayush Patidar', 'Aditya Rao', 'Farzan Nawaz',
- 'Nikhil Hosamani', 'Lakshmi Supriya', 'Bhavitha Mitte', 'Aadarsh Asthana']
- gmail = ['ayushpatidar1712@gmail.com', 'adityarao0909@gmail.com', 'farzannawaz4787@gmail.com',
- 'nikhilhosamani7777@gmail.com', 'karrilakshmisupriya@gmail.com', 'bhavithamitte292@gmail.com',
- 'aadarshasthana2017@gmail.com']
- dt = {'Name':name, 'Contact Detail': gmail}
- made = pd.DataFrame(dt)
- st.write(made)
-
-else:
- pass
\ No newline at end of file
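A sketch (not part of the diff) of the scaling round trip the forecasting app above relies on: fit a MinMaxScaler on the Close series, forecast in scaled space, then map predictions back to the price scale with inverse_transform. The "forecast" here is a placeholder array; the app uses sktime's AutoEnsembleForecaster for that step.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

close = np.array([100.0, 102.5, 101.0, 104.0, 107.5]).reshape(-1, 1)
scaler = MinMaxScaler(feature_range=(0.001, 1))
scaled = scaler.fit_transform(close)                  # values squeezed into [0.001, 1]
scaled_forecast = scaled[-1] * np.ones((3, 1))        # placeholder forecast in scaled space
forecast = scaler.inverse_transform(scaled_forecast)  # back to the original price scale
print(forecast.ravel())                               # ~[107.5, 107.5, 107.5]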
diff --git a/spaces/Paco1112/Super-writing-tool/README.md b/spaces/Paco1112/Super-writing-tool/README.md
deleted file mode 100644
index 37081edb6b3ee113c2ac76fb73466011517ebf42..0000000000000000000000000000000000000000
--- a/spaces/Paco1112/Super-writing-tool/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Pornhub
-emoji: 🚀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/scm-style-repl.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/scm-style-repl.go
deleted file mode 100644
index 17b188a815fc78cecb5e5346a68f4a4a39578f2b..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/scm-style-repl.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/debug.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/debug.go
deleted file mode 100644
index 4ddb5ddadea404bea8690cbbba1c29707eca15bc..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/tree-il/debug.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/musicexp.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/musicexp.py
deleted file mode 100644
index faaf37cc799b9f00cbe0e71046be7b4e8f93c030..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/share/lilypond/2.24.2/python/musicexp.py
+++ /dev/null
@@ -1,2781 +0,0 @@
-# musicexp.py
-# -*- coding: utf-8 -*-
-#
-# This file is part of LilyPond, the GNU music typesetter.
-#
-# Copyright (C) 2005--2022 Han-Wen Nienhuys,
-# 2007--2011 Reinhold Kainhofer
-#
-# LilyPond is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# LilyPond is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with LilyPond. If not, see <https://www.gnu.org/licenses/>.
-
-
-from fractions import Fraction
-import inspect
-import math
-import re
-import sys
-import utilities
-import warnings
-
-import lilylib as ly
-
-# Store previously converted pitch for \relative conversion as a global state variable
-previous_pitch = None
-relative_pitches = False
-whatOrnament = ""
-ly_dur = None # stores lilypond durations
-
-
-def escape_instrument_string(input_string):
- retstring = input_string.replace("\"", "\\\"")
- if re.match('.*[\r\n]+.*', retstring):
- rx = re.compile(r'[\n\r]+')
- strings = rx.split(retstring)
- retstring = "\\markup { \\center-column { "
- for s in strings:
- retstring += "\\line {\"" + s + "\"} "
- retstring += "} }"
- else:
- retstring = "\"" + retstring + "\""
- return retstring
-
-
-class Output_stack_element:
- def __init__(self):
- self.factor = Fraction(1)
-
- def copy(self):
- o = Output_stack_element()
- o.factor = self.factor
- return o
-
-
-class Output_printer(object):
- """
- A class that takes care of formatting (e.g., indenting) a
- Music expression as a .ly file.
- """
-
- def __init__(self):
- self._line = ''
- self._indent = 4
- self._nesting = 0
- self._file = sys.stdout
- self._line_len = 72
- self._output_state_stack = [Output_stack_element()]
- self._skipspace = False
- self._last_duration = None
-
- def set_file(self, file):
- self._file = file
-
- def dump_version(self, version):
- self.print_verbatim('\\version "' + version + '"')
- self.newline()
-
- def get_indent(self):
- return self._nesting * self._indent
-
- def override(self):
- last = self._output_state_stack[-1]
- self._output_state_stack.append(last.copy())
-
- def add_factor(self, factor):
- self.override()
- self._output_state_stack[-1].factor *= factor
-
- def revert(self):
- del self._output_state_stack[-1]
- if not self._output_state_stack:
- raise RuntimeError('empty stack')
-
- def duration_factor(self):
- return self._output_state_stack[-1].factor
-
- def print_verbatim(self, s):
- self._line += s
-
- def unformatted_output(self, s):
- # don't indent on \< and indent only once on <<
- self._nesting += (s.count('<')
- - s.count(r'\<') - s.count('<<')
- + s.count('{'))
- self._nesting -= (s.count('>') - s.count(r'\>') - s.count('>>')
- - s.count('->') - s.count('_>')
- - s.count('^>')
- + s.count('}'))
- self.print_verbatim(s)
-
- def print_duration_string(self, s):
- if self._last_duration == s:
- return
-
- self.unformatted_output(s)
-
-# def print_note_color (self, object, rgb=None):
-# if rgb:
-# str = ("\\once\\override %s.color = #(rgb-color %s # %s %s)" % (object, rgb[0], rgb[1], rgb[2]))
-# else:
-# str = "\\revert %s.color" % object
-# self.newline()
-# self.add_word(str)
-# self.newline()
-
- def add_word(self, s):
- if len(s) + 1 + len(self._line) > self._line_len:
- self.newline()
- self._skipspace = True
-
- if not self._skipspace:
- self._line += ' '
- self.unformatted_output(s)
- self._skipspace = False
-
- def newline(self):
- self._file.write(self._line + '\n')
- self._line = ' ' * self._indent * self._nesting
- self._skipspace = True
-
- def skipspace(self):
- self._skipspace = True
-
- def __call__(self, arg):
- self.dump(arg)
-
- def dump(self, s):
- if self._skipspace:
- self._skipspace = False
- self.unformatted_output(s)
- else:
- # Avoid splitting quoted strings (e.g. "1. Wie") when indenting.
- words = utilities.split_string_and_preserve_doublequoted_substrings(
- s)
- for w in words:
- self.add_word(w)
-
- def close(self):
- self.newline()
- self._file.close()
- self._file = None
-
-
-class Duration:
- def __init__(self):
- self.duration_log = 0
- self.dots = 0
- self.factor = Fraction(1)
-
- def lisp_expression(self):
- return '(ly:make-duration %d %d %d %d)' % (self.duration_log,
- self.dots,
- self.factor.numerator,
- self.factor.denominator)
-
- def ly_expression(self, factor=None, scheme_mode=False):
- global ly_dur # stores lilypond durations
- if not factor:
- factor = self.factor
-
- if self.duration_log < 0:
- if scheme_mode:
- longer_dict = {-1: "breve", -2: "longa"}
- else:
- longer_dict = {-1: "\\breve", -2: "\\longa"}
- dur_str = longer_dict.get(self.duration_log, "1")
- else:
- dur_str = '%d' % (1 << self.duration_log)
- dur_str += '.' * self.dots
-
- if factor != Fraction(1, 1):
- if factor.denominator != 1:
- dur_str += '*%d/%d' % (factor.numerator, factor.denominator)
- else:
- dur_str += '*%d' % factor.numerator
-
- if dur_str.isdigit():
- ly_dur = int(dur_str)
- # TODO: We need to deal with dotted notes and scaled durations
- # otherwise ly_dur won't work in combination with tremolos.
- return dur_str
-
- def print_ly(self, outputter):
- dur_str = self.ly_expression(self.factor / outputter.duration_factor())
- outputter.print_duration_string(dur_str)
-
- def __repr__(self):
- return self.ly_expression()
-
- def copy(self):
- d = Duration()
- d.dots = self.dots
- d.duration_log = self.duration_log
- d.factor = self.factor
- return d
-
- def get_length(self):
- dot_fact = Fraction((1 << (1 + self.dots)) - 1,
- 1 << self.dots)
-
- log = abs(self.duration_log)
- dur = 1 << log
- if self.duration_log < 0:
- base = Fraction(dur)
- else:
- base = Fraction(1, dur)
-
- return base * dot_fact * self.factor
-
-
-def set_create_midi(option):
- """
- Implement the midi command line option '-m' and '--midi'.
- If True, add midi-block to .ly file (see L{musicexp.Score.print_ly}).
-
- @param option: Indicates whether the midi-block has to be added or not.
- @type option: boolean
- """
- global midi_option
- midi_option = option
-
-
-def get_create_midi():
- """
- Return the state of the midi-option, if it exists.
-
- @return: The state of the midi-option.
- @rtype: boolean
- """
- try:
- return midi_option
- except NameError:
- return False
-
-# implement the command line option '--transpose'
-
-
-def set_transpose(option):
- global transpose_option
- transpose_option = option
-
-
-def get_transpose(optType):
- try:
- if optType == "string":
- return '\\transpose c %s' % transpose_option
- elif optType == "integer":
- p = generic_tone_to_pitch(transpose_option)
- return p.semitones()
- except Exception: ## TODO: find out what the possible exception is here.
- if optType == "string":
- return ""
- elif optType == "integer":
- return 0
-
-# implement the command line option '--tab-clef'
-
-
-def set_tab_clef(option):
- global tab_clef_option
- tab_clef_option = option
-
-
-def get_tab_clef():
- try:
- return ("tab", tab_clef_option)[tab_clef_option == "tab" or tab_clef_option == "moderntab"]
- except NameError:
- return "tab"
-
-# definitions of the command line option '--string-numbers'
-
-
-def set_string_numbers(option):
- global string_numbers_option
- string_numbers_option = option
-
-
-def get_string_numbers():
- try:
- return ("t", string_numbers_option)[string_numbers_option == "t" or string_numbers_option == "f"]
- except NameError:
- return "t"
-
-
-def generic_tone_to_pitch(tone):
- accidentals_dict = {
- "": 0,
- "es": -1,
- "s": -1,
- "eses": -2,
- "ses": -2,
- "is": 1,
- "isis": 2
- }
- p = Pitch()
- tone_ = tone.strip().lower()
- p.octave = tone_.count("'") - tone_.count(",")
- tone_ = tone_.replace(",", "").replace("'", "")
- p.step = ((ord(tone_[0]) - ord('a') + 5) % 7)
- p.alteration = accidentals_dict.get(tone_[1:], 0)
- return p
-
-# Implement the different note names for the various languages
-
-
-def pitch_generic(pitch, notenames, accidentals):
- s = notenames[pitch.step]
- halftones = int(pitch.alteration)
- if halftones < 0:
- s += accidentals[0] * (-halftones)
- elif pitch.alteration > 0:
- s += accidentals[3] * (halftones)
- # Handle remaining fraction to pitch.alteration (for microtones)
- if halftones != pitch.alteration:
- if None in accidentals[1:3]:
- ly.warning(
- _("Language does not support microtones contained in the piece"))
- else:
- try:
- s += {-0.5: accidentals[1], 0.5: accidentals[2]
- }[pitch.alteration - halftones]
- except KeyError:
- ly.warning(
- _("Language does not support microtones contained in the piece"))
- return s
-
-
-def pitch_general(pitch):
- s = pitch_generic(pitch, ['c', 'd', 'e', 'f', 'g', 'a', 'b'], [
- 'es', 'eh', 'ih', 'is'])
- if "h" in s: # no short forms for quarter tones
- return s
- return s.replace('aes', 'as').replace('ees', 'es')
-
-
-def pitch_nederlands(pitch):
- return pitch_general(pitch)
-
-
-def pitch_catalan(pitch):
- s = pitch_generic(pitch, ['do', 're', 'mi', 'fa', 'sol', 'la', 'si'], [
- 'b', 'qb', 'qd', 'd'])
- return s.replace('bq', 'tq').replace('dq', 'tq').replace('bt', 'c').replace('dt', 'c')
-
-
-def pitch_deutsch(pitch):
- s = pitch_generic(pitch, ['c', 'd', 'e', 'f', 'g', 'a', 'h'], [
- 'es', 'eh', 'ih', 'is'])
- if s == 'hes':
- return 'b'
- if s[0] == "a":
- return s.replace('e', 'a').replace('aa', 'a')
- return s.replace('ee', 'e')
-
-
-def pitch_english(pitch):
- s = pitch_generic(pitch, ['c', 'd', 'e', 'f', 'g', 'a', 'b'], [
- 'f', 'qf', 'qs', 's'])
- return s[0] + s[1:].replace('fq', 'tq').replace('sq', 'tq')
-
-
-def pitch_espanol(pitch):
- s = pitch_generic(pitch, ['do', 're', 'mi', 'fa', 'sol', 'la', 'si'], [
- 'b', 'cb', 'cs', 's'])
- return s.replace('bc', 'tc').replace('sc', 'tc')
-
-
-def pitch_francais(pitch):
- s = pitch_generic(pitch, ['do', 'ré', 'mi', 'fa', 'sol', 'la', 'si'], [
- 'b', 'sb', 'sd', 'd'])
- return s
-
-
-def pitch_italiano(pitch):
- s = pitch_generic(pitch, ['do', 're', 'mi', 'fa', 'sol', 'la', 'si'], [
- 'b', 'sb', 'sd', 'd'])
- return s
-
-
-def pitch_norsk(pitch):
- s = pitch_generic(pitch, ['c', 'd', 'e', 'f', 'g', 'a', 'h'], [
- 'ess', 'eh', 'ih', 'iss'])
- return s.replace('hess', 'b')
-
-
-def pitch_portugues(pitch):
- s = pitch_generic(pitch, ['do', 're', 'mi', 'fa', 'sol', 'la', 'si'], [
- 'b', 'bqt', 'sqt', 's'])
- return s.replace('bbq', 'btq').replace('ssq', 'stq')
-
-
-def pitch_suomi(pitch):
- s = pitch_generic(pitch, ['c', 'd', 'e', 'f', 'g', 'a', 'h'], [
- 'es', 'eh', 'ih', 'is'])
- if s == 'hes':
- return 'b'
- return s.replace('aes', 'as').replace('ees', 'es')
-
-
-def pitch_svenska(pitch):
- s = pitch_generic(pitch, ['c', 'd', 'e', 'f', 'g', 'a', 'h'], [
- 'ess', 'eh', 'ih', 'iss'])
- if s == 'hess':
- return 'b'
- return s.replace('aes', 'as').replace('ees', 'es')
-
-
-def pitch_vlaams(pitch):
- s = pitch_generic(pitch, ['do', 're', 'mi', 'fa', 'sol', 'la', 'si'], [
- 'b', 'hb', 'hk', 'k'])
- return s
-
-
-def set_pitch_language(language):
- global pitch_generating_function
- function_dict = {
- "nederlands": pitch_nederlands,
- "català": pitch_catalan,
- "deutsch": pitch_deutsch,
- "english": pitch_english,
- "español": pitch_espanol,
- "français": pitch_francais,
- "italiano": pitch_italiano,
- "norsk": pitch_norsk,
- "português": pitch_portugues,
- "suomi": pitch_suomi,
- "svenska": pitch_svenska,
- "vlaams": pitch_vlaams}
- pitch_generating_function = function_dict.get(language, pitch_general)
-
-
-# global variable to hold the formatting function.
-pitch_generating_function = pitch_general
-
-
-class Pitch:
- def __init__(self):
- self.alteration = 0
- self.step = 0
- self.octave = 0
- self._force_absolute_pitch = False
-
- def __repr__(self):
- return self.ly_expression()
-
- def transposed(self, interval):
- c = self.copy()
- c.alteration += interval.alteration
- c.step += interval.step
- c.octave += interval.octave
- c.normalize()
-
- target_st = self.semitones() + interval.semitones()
- c.alteration += target_st - c.semitones()
- return c
-
- def normalize(c):
- while c.step < 0:
- c.step += 7
- c.octave -= 1
- c.octave += c.step // 7
- c.step = c.step % 7
-
- def lisp_expression(self):
- return '(ly:make-pitch %d %d %d)' % (self.octave,
- self.step,
- self.alteration)
-
- def copy(self):
- p = Pitch()
- p.alteration = self.alteration
- p.step = self.step
- p.octave = self.octave
- p._force_absolute_pitch = self._force_absolute_pitch
- return p
-
- def steps(self):
- return self.step + self.octave * 7
-
- def semitones(self):
- return self.octave * 12 + [0, 2, 4, 5, 7, 9, 11][self.step] + self.alteration
-
- def normalize_alteration(c):
- if(c.alteration < 0 and [True, False, False, True, False, False, False][c.step]):
- c.alteration += 1
- c.step -= 1
- elif(c.alteration > 0 and [False, False, True, False, False, False, True][c.step]):
- c.alteration -= 1
- c.step += 1
- c.normalize()
-
- def add_semitones(self, number):
- semi = number + self.alteration
- self.alteration = 0
- if semi == 0:
- return
- sign = (1, -1)[semi < 0]
- prev = self.semitones()
- while abs((prev + semi) - self.semitones()) > 1:
- self.step += sign
- self.normalize()
- self.alteration += (prev + semi) - self.semitones()
- self.normalize_alteration()
-
- def ly_step_expression(self):
- return pitch_generating_function(self)
-
- def absolute_pitch(self):
- if self.octave >= 0:
- return "'" * (self.octave + 1)
- elif self.octave < -1:
- return "," * (-self.octave - 1)
- else:
- return ''
-
- def relative_pitch(self):
- global previous_pitch
- if not previous_pitch:
- previous_pitch = self
- return self.absolute_pitch()
- previous_pitch_steps = previous_pitch.octave * 7 + previous_pitch.step
- this_pitch_steps = self.octave * 7 + self.step
- pitch_diff = (this_pitch_steps - previous_pitch_steps)
- previous_pitch = self
- if pitch_diff > 3:
- return "'" * ((pitch_diff + 3) // 7)
- elif pitch_diff < -3:
- return "," * ((-pitch_diff + 3) // 7)
- else:
- return ""
-
- def ly_expression(self):
- s = self.ly_step_expression()
- if relative_pitches and not self._force_absolute_pitch:
- s += self.relative_pitch()
- else:
- s += self.absolute_pitch()
- return s
-
- def print_ly(self, outputter):
- outputter(self.ly_expression())
-
-
-class Music:
- def __init__(self):
- self.parent = None
- self.start = Fraction(0)
- self.comment = ''
- self.identifier = None
-
- def get_length(self):
- return Fraction(0)
-
- def get_properties(self):
- return ''
-
- def has_children(self):
- return False
-
- def get_index(self):
- if self.parent:
- return self.parent.elements.index(self)
- else:
- return None
-
- def name(self):
- return self.__class__.__name__
-
- def lisp_expression(self):
- name = self.name()
-
- props = self.get_properties()
-
- return "(make-music '%s %s)" % (name, props)
-
- def set_start(self, start):
- self.start = start
-
- def find_first(self, predicate):
- if predicate(self):
- return self
- return None
-
- def print_comment(self, printer, text=None):
- if not text:
- text = self.comment
-
- if not text:
- return
-
- if text == '\n':
- printer.newline()
- return
-
- lines = text.split('\n')
- for l in lines:
- if l:
- printer.unformatted_output('% ' + l)
- printer.newline()
-
- def print_with_identifier(self, printer):
- if self.identifier:
- printer("\\%s" % self.identifier)
- else:
- self.print_ly(printer)
-
- def print_ly(self, printer):
- printer(self.ly_expression())
-
-
-class MusicWrapper (Music):
- def __init__(self):
- Music.__init__(self)
- self.element = None
-
- def print_ly(self, func):
- self.element.print_ly(func)
-
-
-class ModeChangingMusicWrapper (MusicWrapper):
- def __init__(self):
- MusicWrapper.__init__(self)
- self.mode = 'notemode'
-
- def print_ly(self, func):
- func('\\%s' % self.mode)
- MusicWrapper.print_ly(self, func)
-
-
-class RelativeMusic (MusicWrapper):
- def __init__(self):
- MusicWrapper.__init__(self)
- self.basepitch = None
-
- def print_ly(self, func):
- global previous_pitch
- global relative_pitches
- prev_relative_pitches = relative_pitches
- relative_pitches = True
- previous_pitch = self.basepitch
- if not previous_pitch:
- previous_pitch = Pitch()
- func('\\relative %s%s' % (pitch_generating_function(previous_pitch),
- previous_pitch.absolute_pitch()))
- MusicWrapper.print_ly(self, func)
- relative_pitches = prev_relative_pitches
-
-
-class TimeScaledMusic (MusicWrapper):
- def __init__(self):
- MusicWrapper.__init__(self)
- self.numerator = 1
- self.denominator = 1
- self.display_number = "actual" # valid values "actual" | "both" | None
- # Display the basic note length for the tuplet:
- self.display_type = None # valid values "actual" | "both" | None
- self.display_bracket = "bracket" # valid values "bracket" | "curved" | None
- self.actual_type = None # The actually played unit of the scaling
- self.normal_type = None # The basic unit of the scaling
- self.display_numerator = None
- self.display_denominator = None
-
- def print_ly(self, func):
- if self.display_bracket is None:
- func("\\once \\omit TupletBracket")
- func.newline()
- elif self.display_bracket == "curved":
- ly.warning(
- _("Tuplet brackets of curved shape are not correctly implemented"))
- func("\\once \\override TupletBracket.stencil = #ly:slur::print")
- func.newline()
-
- base_number_function = {None: "#f",
- "actual": "tuplet-number::calc-denominator-text",
- "both": "tuplet-number::calc-fraction-text"}.get(self.display_number, None)
- # If we have non-standard numerator/denominator, use our custom function
- if self.display_number == "actual" and self.display_denominator:
- base_number_function = "(tuplet-number::non-default-tuplet-denominator-text %s)" % self.display_denominator
- elif self.display_number == "both" and (self.display_denominator or self.display_numerator):
- if self.display_numerator:
- num = self.display_numerator
- else:
- num = "#f"
- if self.display_denominator:
- den = self.display_denominator
- else:
- den = "#f"
- base_number_function = "(tuplet-number::non-default-tuplet-fraction-text %s %s)" % (
- den, num)
-
- if self.display_type == "actual" and self.normal_type:
- base_duration = self.normal_type.lisp_expression()
- func("\\once \\override TupletNumber.text = #(tuplet-number::append-note-wrapper %s %s)" %
- (base_number_function, base_duration))
- func.newline()
- elif self.display_type == "both": # TODO: Implement this using actual_type and normal_type!
- if self.display_number is None:
- func("\\once \\omit TupletNumber")
- func.newline()
- elif self.display_number == "both":
- den_duration = self.normal_type.lisp_expression()
- # If we don't have an actual type set, use the normal duration!
- if self.actual_type:
- num_duration = self.actual_type.lisp_expression()
- else:
- num_duration = den_duration
- if (self.display_denominator or self.display_numerator):
- func("\\once \\override TupletNumber.text = #(tuplet-number::non-default-fraction-with-notes %s %s %s %s)" %
- (self.display_denominator, den_duration,
- self.display_numerator, num_duration))
- func.newline()
- else:
- func("\\once \\override TupletNumber.text = #(tuplet-number::fraction-with-notes %s %s)" %
- (den_duration, num_duration))
- func.newline()
- else:
- if self.display_number is None:
- func("\\once \\omit TupletNumber")
- func.newline()
- elif self.display_number == "both":
- func("\\once \\override TupletNumber.text = #%s" %
- base_number_function)
- func.newline()
-
- func('\\times %d/%d ' %
- (self.numerator, self.denominator))
- func.add_factor(Fraction(self.numerator, self.denominator))
- MusicWrapper.print_ly(self, func)
- func.revert()
-
-
-class NestedMusic(Music):
- def __init__(self):
- Music.__init__(self)
- self.elements = []
-
- def append(self, what):
- if what:
- self.elements.append(what)
-
- def has_children(self):
- return self.elements
-
- def insert_around(self, succ, elt, dir):
- assert elt.parent is None
- assert succ is None or succ in self.elements
-
- idx = 0
- if succ:
- idx = self.elements.index(succ)
- if dir > 0:
- idx += 1
- else:
- if dir < 0:
- idx = 0
- elif dir > 0:
- idx = len(self.elements)
-
- self.elements.insert(idx, elt)
- elt.parent = self
-
- def get_properties(self):
- return ("'elements (list %s)"
- % " ".join([x.lisp_expression() for x in self.elements]))
-
- def get_subset_properties(self, predicate):
- return ("'elements (list %s)"
- % " ".join([x.lisp_expression() for x in list(filter(predicate, self.elements))]))
-
- def get_neighbor(self, music, dir):
- assert music.parent == self
- idx = self.elements.index(music)
- idx += dir
- idx = min(idx, len(self.elements) - 1)
- idx = max(idx, 0)
-
- return self.elements[idx]
-
- def delete_element(self, element):
- assert element in self.elements
-
- self.elements.remove(element)
- element.parent = None
-
- def set_start(self, start):
- self.start = start
- for e in self.elements:
- e.set_start(start)
-
- def find_first(self, predicate):
- r = Music.find_first(self, predicate)
- if r:
- return r
-
- for e in self.elements:
- r = e.find_first(predicate)
- if r:
- return r
- return None
-
-
-class SequentialMusic (NestedMusic):
- def get_last_event_chord(self):
- value = None
- at = len(self.elements) - 1
- while (at >= 0 and
- not isinstance(self.elements[at], ChordEvent) and
- not isinstance(self.elements[at], BarLine)):
- at -= 1
-
- if (at >= 0 and isinstance(self.elements[at], ChordEvent)):
- value = self.elements[at]
- return value
-
- def print_ly(self, printer, newline=True):
- printer('{')
- if self.comment:
- self.print_comment(printer)
-
- if newline:
- printer.newline()
- for e in self.elements:
- e.print_ly(printer)
-
- printer('}')
- if newline:
- printer.newline()
-
- def lisp_sub_expression(self, pred):
- name = self.name()
-
- props = self.get_subset_properties(pred)
-
- return "(make-music '%s %s)" % (name, props)
-
- def set_start(self, start):
- for e in self.elements:
- e.set_start(start)
- start += e.get_length()
-
-
-class RepeatedMusic:
- def __init__(self):
- self.repeat_type = "volta"
- self.repeat_count = 2
- self.endings = []
- self.music = None
-
- def set_music(self, music):
- if isinstance(music, Music):
- self.music = music
- elif isinstance(music, list):
- self.music = SequentialMusic()
- self.music.elements = music
- else:
- ly.warning(_("unable to set the music %(music)s for the repeat %(repeat)s") %
- {'music': music, 'repeat': self})
-
- def add_ending(self, music):
- self.endings.append(music)
-
- def print_ly(self, printer):
- printer.dump('\\repeat %s %s' % (self.repeat_type, self.repeat_count))
- if self.music:
- self.music.print_ly(printer)
- else:
- ly.warning(_("encountered repeat without body"))
- printer.dump('{}')
- if self.endings:
- printer.dump('\\alternative {')
- for e in self.endings:
- e.print_ly(printer)
- printer.dump('}')
-
-
-class Lyrics:
- def __init__(self):
- self.lyrics_syllables = []
-
- def print_ly(self, printer):
- printer.dump(self.ly_expression())
- printer.newline()
- printer.dump('}')
- printer.newline()
-
- def ly_expression(self):
- lstr = r"\lyricmode {\set ignoreMelismata = ##t"
- for l in self.lyrics_syllables:
- lstr += l
- #lstr += "\n}"
- return lstr
-
-
-class Header:
-
- def __init__(self):
- self.header_fields = {}
-
- def set_field(self, field, value):
- self.header_fields[field] = value
-
- def format_header_strings(self, key, value, printer):
- printer.dump(key + ' = ')
-
- # If a header item contains a line break, it is segmented. The
- # substrings are formatted with the help of \markup, using
- # \column and \line. An exception, however, are texidoc items,
- # which should not contain LilyPond formatting commands.
- if (key != 'texidoc') and ('\n' in value):
- value = value.replace('"', '')
- printer.dump(r'\markup \column {')
- substrings = value.split('\n')
- for s in substrings:
- printer.newline()
- printer.dump(r'\line { "' + s + '"}')
- printer.dump('}')
- printer.newline()
- else:
- printer.dump(value)
- printer.newline()
-
- def print_ly(self, printer):
- printer.dump(r"\header {")
- printer.newline()
- for (k, v) in list(self.header_fields.items()):
- if v:
- self.format_header_strings(k, v, printer)
- # printer.newline()
- printer.dump("}")
- printer.newline()
- printer.newline()
-
-
-class Paper:
- def __init__(self):
- self.global_staff_size = -1
- # page size
- self.page_width = -1
- self.page_height = -1
- # page margins
- self.top_margin = -1
- self.bottom_margin = -1
- self.left_margin = -1
- self.right_margin = -1
- self.system_left_margin = -1
- self.system_right_margin = -1
- self.system_distance = -1
- self.top_system_distance = -1
- self.indent = 0
- self.short_indent = 0
- self.instrument_names = []
-
- def print_length_field(self, printer, field, value):
- if value >= 0:
- printer.dump("%s = %s\\cm" % (field, value))
- printer.newline()
-
- def get_longest_instrument_name(self):
- result = ''
- for name in self.instrument_names:
- lines = name.split('\n')
- for line in lines:
- if len(line) > len(result):
- result = line
- return result
-
- def print_ly(self, printer):
- if self.global_staff_size > 0:
- printer.dump('#(set-global-staff-size %s)' %
- self.global_staff_size)
- printer.newline()
- printer.dump('\\paper {')
- printer.newline()
- printer.newline()
- self.print_length_field(printer, "paper-width", self.page_width)
- self.print_length_field(printer, "paper-height", self.page_height)
- self.print_length_field(printer, "top-margin", self.top_margin)
- self.print_length_field(printer, "bottom-margin", self.bottom_margin)
- self.print_length_field(printer, "left-margin", self.left_margin)
- # TODO: maybe set line-width instead of right-margin?
- self.print_length_field(printer, "right-margin", self.right_margin)
- # TODO: What's the corresponding setting for system_left_margin and
- # system_right_margin in LilyPond?
- self.print_length_field(
- printer, "between-system-space", self.system_distance)
- self.print_length_field(
- printer, "page-top-space", self.top_system_distance)
- # TODO: Compute the indentation with the instrument name lengths
-
- # TODO: font width ?
- char_per_cm = (len(self.get_longest_instrument_name())
- * 13) / self.page_width
- if self.indent != 0:
- self.print_length_field(printer, "indent", self.indent/char_per_cm)
- if self.short_indent != 0:
- self.print_length_field(
- printer, "short-indent", self.short_indent/char_per_cm)
-
- printer.dump('}')
- printer.newline()
-
-
-class Layout:
- def __init__(self):
- self.context_dict = {}
-
- def add_context(self, context):
- if context not in self.context_dict:
- self.context_dict[context] = []
-
- def set_context_item(self, context, item):
- self.add_context(context)
- if not item in self.context_dict[context]:
- self.context_dict[context].append(item)
-
- def print_ly(self, printer):
- if list(self.context_dict.items()):
- printer.dump('\\layout {')
- printer.newline()
- for (context, defs) in list(self.context_dict.items()):
- printer.dump('\\context { \\%s' % context)
- printer.newline()
- for d in defs:
- printer.dump(d)
- printer.newline()
- printer.dump('}')
- printer.newline()
- printer.dump('}')
- printer.newline()
-
-
-class ChordEvent (NestedMusic):
- def __init__(self):
- NestedMusic.__init__(self)
- self.after_grace_elements = None
- self.grace_elements = None
- self.grace_type = None
-
- def append_grace(self, element):
- if element:
- if not self.grace_elements:
- self.grace_elements = SequentialMusic()
- self.grace_elements.append(element)
-
- def append_after_grace(self, element):
- if element:
- if not self.after_grace_elements:
- self.after_grace_elements = SequentialMusic()
- self.after_grace_elements.append(element)
-
- def has_elements(self):
- return [e for e in self.elements if
- isinstance(e, NoteEvent) or isinstance(e, RestEvent)] != []
-
- def get_length(self):
- l = Fraction(0)
- for e in self.elements:
- l = max(l, e.get_length())
- return l
-
- def get_duration(self):
- note_events = [e for e in self.elements if
- isinstance(e, NoteEvent) or isinstance(e, RestEvent)]
- if note_events:
- return note_events[0].duration
- else:
- return None
-
- def print_ly(self, printer):
- note_events = [e for e in self.elements if
- isinstance(e, NoteEvent)]
-
- rest_events = [e for e in self.elements if
- isinstance(e, RhythmicEvent)
- and not isinstance(e, NoteEvent)]
-
- other_events = [e for e in self.elements if
- not isinstance(e, RhythmicEvent)]
-
- if self.after_grace_elements:
- printer('\\afterGrace {')
-
- if self.grace_elements and self.elements:
- if self.grace_type:
- printer('\\%s' % self.grace_type)
- else:
- printer('\\grace')
- # don't print newlines after the { and } braces
- self.grace_elements.print_ly(printer, False)
- elif self.grace_elements: # no self.elements!
- ly.warning(_("Grace note with no following music: %s") %
- self.grace_elements)
- if self.grace_type:
- printer('\\%s' % self.grace_type)
- else:
- printer('\\grace')
- self.grace_elements.print_ly(printer, False)
- printer('{}')
-
- # Print all overrides and other settings needed by the
- # articulations/ornaments before the note
-
- for e in other_events:
- if not hasattr(e, 'print_before_note'):
- continue
- e.print_before_note(printer)
-
- if rest_events:
- rest_events[0].print_ly(printer)
- elif len(note_events) == 1:
- note_events[0].print_ly(printer)
- elif note_events:
- global previous_pitch
- pitches = []
- basepitch = None
- stem = None
- for x in note_events:
- if x.associated_events:
- for aev in x.associated_events:
- if (isinstance(aev, StemEvent) and aev.value):
- stem = aev
- pitches.append(x.chord_element_ly())
- if not basepitch:
- basepitch = previous_pitch
- if stem:
- printer(stem.ly_expression())
- printer('<%s>' % ' '.join(pitches))
- previous_pitch = basepitch
- duration = self.get_duration()
- if duration:
- duration.print_ly(printer)
- else:
- pass
-
- for e in other_events:
- e.print_ly(printer)
-
- for e in other_events:
- if not hasattr(e, 'print_after_note'):
- continue
- e.print_after_note(printer)
-
- if self.after_grace_elements:
- printer('}')
- self.after_grace_elements.print_ly(printer, False)
-
- self.print_comment(printer)
-
-
-class Partial (Music):
- def __init__(self):
- Music.__init__(self)
- self.partial = None
-
- def print_ly(self, printer):
- if self.partial:
- printer.dump("\\partial %s" % self.partial.ly_expression())
-
-
-class BarLine (Music):
- def __init__(self):
- Music.__init__(self)
- self.bar_number = 0
- self.type = None
-
- def print_ly(self, printer):
- bar_symbol = {
- 'dashed': '!',
- 'dotted': ';',
- 'heavy': '.',
- 'heavy-heavy': '..',
- 'heavy-light': '.|',
- 'light-heavy': '|.',
- 'light-light': '||',
- 'none': '',
- 'regular': '|',
- 'short': ',',
- 'tick': "'"}.get(self.type, None)
- if bar_symbol is not None:
- printer.dump('\\bar "%s"' % bar_symbol)
- else:
- printer.dump("|")
-
- if self.bar_number > 0 and (self.bar_number % 10) == 0:
- printer.dump("\\barNumberCheck #%d " % self.bar_number)
- elif self.bar_number > 0:
- printer.print_verbatim(' %% %d' % self.bar_number)
- printer.newline()
-
- def ly_expression(self):
- return " | "
-
-
-class Event(Music):
- def __init__(self):
- # strings to print before the note to which an event is attached.
- # Ignored for notes etc.
- super(Event, self).__init__()
- self.before_note = None
- self.after_note = None
- # print something before the note to which an event is attached, e.g. overrides
-
- def print_before_note(self, printer):
- if self.before_note:
- printer.dump(self.before_note)
- # print something after the note to which an event is attached, e.g. resetting
-
- def print_after_note(self, printer):
- if self.after_note:
- printer.dump(self.after_note)
- pass
-
-
-class SpanEvent (Event):
- def __init__(self):
- Event.__init__(self)
- self.span_direction = 0 # start/stop
- self.line_type = 'solid'
- self.span_type = 0 # e.g. cres/decrescendo, ottava up/down
- self.size = 0 # size of e.g. octave shift
-
- def wait_for_note(self):
- return True
-
- def get_properties(self):
- return "'span-direction %d" % self.span_direction
-
- def set_span_type(self, type):
- self.span_type = type
-
-
-class BreatheEvent (Event):
- def __init__(self):
- super().__init__()
- self.after_note = "\\breathe"
-
- def ly_expression(self):
- return ''
-
-
-class CaesuraEvent (Event):
- def __init__(self):
- super().__init__()
- self.after_note = "\\caesura"
-
- def ly_expression(self):
- return ''
-
-
-class SlurEvent (SpanEvent):
- def print_before_note(self, printer):
- command = {'dotted': '\\slurDotted',
- 'dashed': '\\slurDashed'}.get(self.line_type, '')
- if command and self.span_direction == -1:
- printer.dump(command)
-
- def print_after_note(self, printer):
- # reset non-solid slur types!
- command = {'dotted': '\\slurSolid',
- 'dashed': '\\slurSolid'}.get(self.line_type, '')
- if command and self.span_direction == -1:
- printer.dump(command)
-
- def ly_expression(self):
- return {-1: '(', 1: ')'}.get(self.span_direction, '')
-
-
-class BeamEvent (SpanEvent):
- def ly_expression(self):
- return {-1: '[', 1: ']'}.get(self.span_direction, '')
-
-
-class PedalEvent (SpanEvent):
- def ly_expression(self):
- return {-1: '\\sustainOn',
- 0: '\\sustainOff\\sustainOn',
- 1: '\\sustainOff'}.get(self.span_direction, '')
-
-
-class TextSpannerEvent (SpanEvent):
- def print_before_note(self, printer):
- if hasattr(self, 'style') and self.style == "wave":
- printer.dump(r"\once \override TextSpanner.style = #'trill")
- if hasattr(self, 'force_direction'):
- x = {-1: '\\textSpannerDown', 0: '\\textSpannerNeutral',
- 1: '\\textSpannerUp'}.get(self.force_direction, '')
- printer.dump(x)
- def print_after_note(self, printer):
- pass
-
- def ly_expression(self):
- global whatOrnament
- if hasattr(self, 'style') and self.style == "ignore":
- return ""
- # if self.style=="wave":
- if whatOrnament == "wave":
- return {-1: '\\startTextSpan',
- 1: '\\stopTextSpan'}.get(self.span_direction, '')
- else:
- if hasattr(self, 'style') and self.style == "stop" and whatOrnament != "trill":
- return ""
- return {-1: '\\startTrillSpan',
- 1: '\\stopTrillSpan'}.get(self.span_direction, '')
-
-
-class BracketSpannerEvent (SpanEvent):
- # Ligature brackets use prefix-notation!!!
- def print_before_note(self, printer):
- if self.span_direction == -1:
- if self.force_direction == 1:
- printer.dump(r"\once \override LigatureBracket.direction = #UP")
- elif self.force_direction == -1:
- printer.dump(
- r"\once \override LigatureBracket.direction = #DOWN")
- printer.dump(r'\[')
- # the bracket after the last note
-
- def print_after_note(self, printer):
- if self.span_direction == 1:
- printer.dump(r'\]')
- # we're printing everything in print_(before|after)_note...
-
- def ly_expression(self):
- return ''
-
-
-class OctaveShiftEvent (SpanEvent):
- def wait_for_note(self):
- return False
-
- def set_span_type(self, type):
- self.span_type = {'up': 1, 'down': -1}.get(type, 0)
-
- def ly_octave_shift_indicator(self):
- # convert 8/15 to lilypond indicators (+-1/+-2)
- try:
- value = {8: 1, 15: 2}[self.size]
- except KeyError:
- ly.warning(
- _("Invalid octave shift size found: %s. Using no shift.") % self.size)
- value = 0
- # negative values go up!
- value *= -1 * self.span_type
- return value
-
- def ly_expression(self):
- dir = self.ly_octave_shift_indicator()
- value = ''
- if dir:
- value = r'\ottava #%s' % dir
- return {
- - 1: value,
- 1: r'\ottava #0'}.get(self.span_direction, '')
-
-
-class TrillSpanEvent (SpanEvent):
- def ly_expression(self):
- return {-1: '\\startTrillSpan',
- 0: '', # no need to write out anything for type='continue'
- 1: '\\stopTrillSpan'}.get(self.span_direction, '')
-
-
-class GlissandoEvent (SpanEvent):
- def print_before_note(self, printer):
- if self.span_direction == -1:
-            style = {
-                "dashed": "dashed-line",
-                "dotted": "dotted-line",
-                "wavy": "zigzag"
-            }.get(self.line_type, None)
- if style:
- printer.dump(
- "\\once \\override Glissando.style = #'%s" % style)
-
- def ly_expression(self):
- return {-1: '\\glissando',
- 1: ''}.get(self.span_direction, '')
-
-
-class ArpeggioEvent(Event):
- def __init__(self):
- Event.__init__(self)
- self.direction = 0
- self.non_arpeggiate = False
-
- def wait_for_note(self):
- return True
-
- def print_before_note(self, printer):
- if self.non_arpeggiate:
- printer.dump("\\arpeggioBracket")
- else:
- dir = {-1: "\\arpeggioArrowDown",
- 1: "\\arpeggioArrowUp"}.get(self.direction, '')
- if dir:
- printer.dump(dir)
-
- def print_after_note(self, printer):
- if self.non_arpeggiate or self.direction:
- printer.dump("\\arpeggioNormal")
-
- def ly_expression(self):
- return '\\arpeggio'
-
-
-class TieEvent(Event):
- def ly_expression(self):
- return '~'
-
-
-class HairpinEvent (SpanEvent):
- def set_span_type(self, type):
- self.span_type = {'crescendo': 1, 'decrescendo': -
- 1, 'diminuendo': -1}.get(type, 0)
-
- def hairpin_to_ly(self):
- if self.span_direction == 1:
- return r'\!'
- else:
- return {1: r'\<', -1: r'\>'}.get(self.span_type, '')
-
- def direction_mod(self):
- return {1: '^', -1: '_', 0: '-'}.get(self.force_direction, '-')
-
- def ly_expression(self):
- return self.hairpin_to_ly()
-
- def print_ly(self, printer):
- val = self.hairpin_to_ly()
- if val:
- # printer.dump (val)
- printer.dump('%s%s' % (self.direction_mod(), val))
-
-
-class DynamicsEvent (Event):
- def __init__(self):
- Event.__init__(self)
- self.type = None
- self.force_direction = 0
-
- def wait_for_note(self):
- return True
-
- def ly_expression(self):
- if self.type:
- return r'\%s' % self.type
- else:
- return
-
- def direction_mod(self):
- return {1: '^', -1: '_', 0: '-'}.get(self.force_direction, '-')
-
- def print_ly(self, printer):
- if self.type:
- printer.dump('%s\\%s' % (self.direction_mod(), self.type))
-
-
-class MarkEvent (Event):
- def __init__(self, text="\\default"):
- Event.__init__(self)
- self.mark = text
-
- def wait_for_note(self):
- return False
-
- def ly_contents(self):
- if self.mark:
- return '%s' % self.mark
- else:
- return "\"ERROR\""
-
- def ly_expression(self):
- return '\\mark %s' % self.ly_contents()
-
-
-class MusicGlyphMarkEvent (MarkEvent):
- def ly_contents(self):
- if self.mark:
- return '\\markup { \\musicglyph "scripts.%s" }' % self.mark
- else:
- return ''
-
-
-class TextEvent (Event):
- def __init__(self):
- Event.__init__(self)
-        self.text = None
- self.force_direction = None
- self.markup = ''
-
- def wait_for_note(self):
- r""" This is problematic: the lilypond-markup ^"text"
- requires wait_for_note to be true. Otherwise the
- compilation will fail. So we are forced to set return to True.
- But in some cases this might lead to a wrong placement of the text.
- In case of words like Allegro the text should be put in a '\tempo'-command.
- In this case we don't want to wait for the next note.
- In some other cases the text is supposed to be used in a r'\mark\markup' construct.
- We would not want to wait for the next note either.
- There might be other problematic situations.
- In the long run we should differentiate between various contexts in MusicXML, e.g.
- the following markup should be interpreted as '\tempo "Allegretto"':
-
-            <direction placement="above">
-              <direction-type>
-                <words>Allegretto</words>
-              </direction-type>
-            </direction>
-
-        In the meantime, arising problems have to be corrected manually after the conversion.
- """
- return True
-
- def direction_mod(self):
- """ 1: placement="above"; -1: placement="below"; 0: no placement attribute.
- see musicxml_direction_to_indicator in musicxml2ly_conversion.py """
- return {1: '^', -1: '_', 0: '-'}.get(self.force_direction, '-')
-
- def ly_expression(self):
- # self.text will be enclosed by quotes, and the direction
- # modifier must be separated from the opening quote by a space.
- # This is so that subsequent line breaking for the output file
- # using utilities.split_string_and_preserve_doublequoted_strings()
- # properly detects the opening quote.
- base_string = '%s \"%s\"'
- if self.markup:
- base_string = r'%s\markup{ ' + self.markup + ' {%s} }'
- return base_string % (self.direction_mod(), self.text)
-
-
-class ArticulationEvent (Event):
- def __init__(self):
- Event.__init__(self)
- self.type = None
- self.force_direction = None
-
- def wait_for_note(self):
- return True
-
- def direction_mod(self):
- return {1: '^', -1: '_', 0: '-'}.get(self.force_direction, '')
-
- def ly_expression(self):
- return '%s\\%s' % (self.direction_mod(), self.type)
-
-
-class ShortArticulationEvent (ArticulationEvent):
- def direction_mod(self):
- # default is -
- return {1: '^', -1: '_', 0: '-'}.get(self.force_direction, '-')
-
- def ly_expression(self):
- if self.type:
- return '%s%s' % (self.direction_mod(), self.type)
- else:
- return ''
-
-
-class NoDirectionArticulationEvent (ArticulationEvent):
- def ly_expression(self):
- if self.type:
- return '\\%s' % self.type
- else:
- return ''
-
-class MarkupEvent (ShortArticulationEvent):
- def __init__(self):
- ArticulationEvent.__init__(self)
- self.contents = None
-
- def ly_expression(self):
- if self.contents:
- return "%s\\markup { %s }" % (self.direction_mod(), self.contents)
- else:
- return ''
-
-
-class FretEvent (MarkupEvent):
- def __init__(self):
- MarkupEvent.__init__(self)
- self.force_direction = 1
- self.strings = 6
- self.frets = 4
- self.barre = None
- self.elements = []
-
- def ly_expression(self):
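-        # Build the string argument of LilyPond's \fret-diagram markup:
-        # "w:" sets the string count, "h:" the fret count, "c:" a barre
-        # (string-string-fret), and "f:1;" switches fingering display on.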
- val = ""
- if self.strings != 6:
- val += "w:%s;" % self.strings
- if self.frets != 4:
- val += "h:%s;" % self.frets
- if self.barre and len(self.barre) >= 3:
- val += "c:%s-%s-%s;" % (self.barre[0], self.barre[1],
- self.barre[2]+get_transpose("integer"))
- have_fingering = False
- for i in self.elements:
- if len(i) > 1:
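-                # Transpose numeric fret values only; string entries are kept as-is.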
- val += "%s-%s" % (i[0], i[1]+(get_transpose("integer"),
- '')[isinstance(i[1], str)])
- if len(i) > 2:
- have_fingering = True
- val += "-%s" % i[2]
- val += ";"
- if have_fingering:
- val = "f:1;" + val
- if val:
- return "%s\\markup { \\fret-diagram #\"%s\" }" % (self.direction_mod(), val)
- else:
- return ''
-
-
-class FretBoardNote (Music):
- def __init__(self):
- Music.__init__(self)
- self.pitch = None
- self.string = None
- self.fingering = None
-
- def ly_expression(self):
- s = self.pitch.ly_expression()
- if self.fingering:
- s += "-%s" % self.fingering
- if self.string:
- s += r"\%s" % self.string
- return s
-
-
-class FretBoardEvent (NestedMusic):
- def __init__(self):
- NestedMusic.__init__(self)
- self.duration = None
-
- def print_ly(self, printer):
- fretboard_notes = [
- n for n in self.elements if isinstance(n, FretBoardNote)]
- if fretboard_notes:
- notes = []
- for n in fretboard_notes:
- notes.append(n.ly_expression())
- contents = ' '.join(notes)
- printer('<%s>%s' % (contents, self.duration))
-
-
-class FunctionWrapperEvent (Event):
- def __init__(self, function_name=None):
- Event.__init__(self)
- self.function_name = function_name
-
- def pre_note_ly(self, is_chord_element):
- if self.function_name:
- return "\\%s" % self.function_name
- else:
- return ''
-
- def pre_chord_ly(self):
- return ''
-
- def ly_expression(self):
- if self.function_name:
- return "\\%s" % self.function_name
- else:
- return ''
-
-
-class ParenthesizeEvent (FunctionWrapperEvent):
- def __init__(self):
- FunctionWrapperEvent.__init__(self, "parenthesize")
-
-
-class StemEvent (Event):
- """"
- A class to take care of stem values (up, down, double, none)
- """
-
- def __init__(self):
- Event.__init__(self)
- self.value = None
-
- def pre_chord_ly(self):
- if self.value:
- return "\\%s" % self.value
- else:
- return ''
-
- def pre_note_ly(self, is_chord_element):
- return ''
-
- def ly_expression(self):
- return self.pre_chord_ly()
-
-
-class NotestyleEvent (Event): # class changed by DaLa: additional attribute color
- def __init__(self):
- Event.__init__(self)
- self.style = None
- self.filled = None
- self.color = None
-
- def pre_chord_ly(self):
- return_string = ''
- if self.style:
- return_string += " \\once \\override NoteHead.style = #%s" % self.style
- if self.color:
- return_string += " \\once \\override NoteHead.color = #(rgb-color %s %s %s)" % (
- self.color[0], self.color[1], self.color[2])
- return return_string
-
- def pre_note_ly(self, is_chord_element):
- if self.style and is_chord_element:
- return "\\tweak style #%s" % self.style
- else:
- return ''
-
- def ly_expression(self):
- return self.pre_chord_ly()
-
-
-class StemstyleEvent (Event): # class added by DaLa
- def __init__(self):
- Event.__init__(self)
- self.color = None
-
- def pre_chord_ly(self):
- if self.color:
- return "\\once \\override Stem.color = #(rgb-color %s %s %s)" % (self.color[0], self.color[1], self.color[2])
- else:
- return ''
-
- def pre_note_ly(self, is_chord_element):
- return ''
-
- def ly_expression(self):
- return self.pre_chord_ly()
-
-
-class ChordPitch:
- def __init__(self):
- self.alteration = 0
- self.step = 0
-
- def __repr__(self):
- return self.ly_expression()
-
- def ly_expression(self):
- return pitch_generating_function(self)
-
-
-class ChordModification:
- def __init__(self):
- self.alteration = 0
- self.step = 0
- self.type = 0
-
- def ly_expression(self):
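-        # Chordmode syntax: "." introduces an added/altered step, "^" a removed
-        # one; "+"/"-" raise or lower the step by a half step.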
- if self.type:
- val = {1: ".", -1: "^"}.get(self.type, "")
- val += "%s" % self.step
- val += {1: "+", -1: "-"}.get(self.alteration, "")
- return val
- else:
- return ''
-
-
-class ChordNameEvent (Event):
- def __init__(self):
- Event.__init__(self)
- self.root = None
- self.kind = None
- self.duration = None
- self.modifications = []
- self.bass = None
-
- def add_modification(self, mod):
- self.modifications.append(mod)
-
- def ly_expression(self):
-
- if not self.root:
- return ''
- value = self.root.ly_expression()
- if self.duration:
- value += self.duration.ly_expression()
- if self.kind:
- value = value + self.kind
- # First print all additions/changes, and only afterwards all subtractions
- for m in self.modifications:
- if m.type == 1:
- value += m.ly_expression()
- for m in self.modifications:
- if m.type == -1:
- value += m.ly_expression()
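-        # An explicit bass note uses LilyPond's "/+" chord-mode notation.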
- if self.bass:
- value += "/+%s" % self.bass.ly_expression()
- return value
-
-
-class TremoloEvent(ArticulationEvent):
- def __init__(self):
- Event.__init__(self)
- self.strokes = 0
-
- def ly_expression(self):
- ly_str = ''
- if self.strokes and int(self.strokes) > 0:
- # ly_dur is a global variable defined in class Duration
- # ly_dur stores the value of the reciprocal values of notes
- # ly_dur is used here to check the current note duration
- # if the duration is smaller than 8, e.g.
- # quarter, half and whole notes,
- # `:(2 ** (2 + number of tremolo strokes))'
- # should be appended to the pitch and duration, e.g.
- # 1 stroke: `c4:8' or `c2:8' or `c1:8'
- # 2 strokes: `c4:16' or `c2:16' or `c1:16'
- # ...
- # else (if ly_dur is equal to or greater than 8):
- # we need to make sure that the tremolo value that is to
- # be appended to the pitch and duration is twice the
-            # duration (if there is only one tremolo stroke).
- # Each additional stroke doubles the tremolo value, e.g.:
- # 1 stroke: `c8:16', `c16:32', `c32:64', ...
- # 2 strokes: `c8:32', `c16:64', `c32:128', ...
- # ...
- if ly_dur < 8:
- ly_str += ':%s' % (2 ** (2 + int(self.strokes)))
- else:
- ly_str += ':%s' % (2 **
- int((math.log(ly_dur, 2)) + int(self.strokes)))
- return ly_str
-
-
-class BendEvent (ArticulationEvent):
- def __init__(self):
- Event.__init__(self)
- self.alter = None
-
- def ly_expression(self):
- if self.alter is not None:
- return "-\\bendAfter #%s" % self.alter
- else:
- return ''
-
-
-class RhythmicEvent(Event):
- def __init__(self):
- Event.__init__(self)
- self.duration = Duration()
- self.associated_events = []
-
- def add_associated_event(self, ev):
- if ev:
- self.associated_events.append(ev)
-
- def pre_chord_ly(self):
- return [ev.pre_chord_ly() for ev in self.associated_events]
-
- def pre_note_ly(self, is_chord_element):
- return [ev.pre_note_ly(is_chord_element) for ev in self.associated_events]
-
- def ly_expression_pre_note(self, is_chord_element):
- res = ' '.join(self.pre_note_ly(is_chord_element))
- if res != '':
- res = res + ' '
- return res
-
- def get_length(self):
- return self.duration.get_length()
-
- def get_properties(self):
- return ("'duration %s"
- % self.duration.lisp_expression())
-
-
-class RestEvent (RhythmicEvent):
- def __init__(self):
- RhythmicEvent.__init__(self)
- self.pitch = None
-
- def ly_expression(self):
- res = self.ly_expression_pre_note(False)
- if self.pitch:
- return res + "%s%s\\rest" % (self.pitch.ly_expression(), self.duration.ly_expression())
- else:
- return 'r%s' % self.duration.ly_expression()
-
- def print_ly(self, printer):
- for ev in self.associated_events:
- ev.print_ly(printer)
-# if hasattr(self, 'color'):
-# printer.print_note_color("NoteHead", self.color)
-# printer.print_note_color("Stem", self.color)
-# printer.print_note_color("Beam", self.color)
- if self.pitch:
- self.pitch.print_ly(printer)
- self.duration.print_ly(printer)
- printer('\\rest')
- else:
- printer('r')
- self.duration.print_ly(printer)
-
-
-class SkipEvent (RhythmicEvent):
- def ly_expression(self):
- return 's%s' % self.duration.ly_expression()
-
-
-class NoteEvent(RhythmicEvent):
- def __init__(self):
- RhythmicEvent.__init__(self)
- self.pitch = Pitch()
- self.cautionary = False
- self.forced_accidental = False
-
- def get_properties(self):
- s = RhythmicEvent.get_properties(self)
-
- if self.pitch:
- s += self.pitch.lisp_expression()
-
- return s
-
- def pitch_mods(self):
- excl_question = ''
- if self.cautionary:
- excl_question += '?'
- if self.forced_accidental:
- excl_question += '!'
-
- return excl_question
-
- def ly_expression(self):
- # obtain all stuff that needs to be printed before the note:
- res = self.ly_expression_pre_note(True)
- if self.pitch:
- return res + '%s%s%s' % (self.pitch.ly_expression(),
- self.pitch_mods(),
- self.duration.ly_expression())
-
- def chord_element_ly(self):
- # obtain all stuff that needs to be printed before the note:
- res = self.ly_expression_pre_note(True)
- if self.pitch:
- return res + '%s%s' % (self.pitch.ly_expression(),
- self.pitch_mods())
-
- def print_ly(self, printer):
- for ev in self.associated_events:
- ev.print_ly(printer)
- if hasattr(self, 'color'):
- printer.print_note_color("NoteHead", self.color)
- printer.print_note_color("Stem", self.color)
- printer.print_note_color("Beam", self.color)
-
- if hasattr(self, "pitch"):
- self.pitch.print_ly(printer)
- printer(self.pitch_mods())
-
- self.duration.print_ly(printer)
-
-# if hasattr(self, 'color'):
-# printer.print_note_color("NoteHead")
-# printer.print_note_color("Stem")
-# printer.print_note_color("Beam")
-
-
-class KeySignatureChange (Music):
- def __init__(self):
- Music.__init__(self)
- self.tonic = None
- self.mode = 'major'
- self.non_standard_alterations = None
-
- def format_non_standard_alteration(self, a):
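-        # Convert a (step, alteration[, octave]) entry into an element of the
-        # quasi-quoted list assigned to Staff.keyAlterations below.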
-        alter_dict = {-2: ",DOUBLE-FLAT",
-                      -1.5: ",THREE-Q-FLAT",
-                      -1: ",FLAT",
-                      -0.5: ",SEMI-FLAT",
-                      0: ",NATURAL",
-                      0.5: ",SEMI-SHARP",
-                      1: ",SHARP",
-                      1.5: ",THREE-Q-SHARP",
-                      2: ",DOUBLE-SHARP"}
- try:
- accidental = alter_dict[a[1]]
- except KeyError:
- ly.warning(
- _("Unable to convert alteration %s to a lilypond expression") % a[1])
- return ''
- if len(a) == 2:
- return "( %s . %s )" % (a[0], accidental)
- elif len(a) == 3:
- return "(( %s . %s ) . %s )" % (a[2], a[0], accidental)
- else:
- return ''
-
- def ly_expression(self):
- if self.tonic:
- return '\\key %s \\%s' % (self.tonic.ly_step_expression(),
- self.mode)
- elif self.non_standard_alterations:
- alterations = [self.format_non_standard_alteration(a) for
- a in self.non_standard_alterations]
- return "\\set Staff.keyAlterations = #`(%s)" % " ".join(alterations)
- else:
- return ''
-
-
-class ShiftDurations (MusicWrapper):
- def __init__(self):
- MusicWrapper.__init__(self)
- self.params = [0, 0]
-
- def set_shift_durations_parameters(self, timeSigChange):
- self.params = timeSigChange.get_shift_durations_parameters()
-
- def print_ly(self, func):
- func(' \\shiftDurations #%d #%d ' % tuple(self.params))
- MusicWrapper.print_ly(self, func)
-
-
-class TimeSignatureChange (Music):
- def __init__(self):
- Music.__init__(self)
- self.fractions = [4, 4]
- self.style = None
- # Used for the --time-signature option of musicxml2ly
- self.originalFractions = [4, 4]
- self.visible = True
-
- def get_fractions_ratio(self):
- """
- Calculate the ratio between the original time fraction and the new one.
- Used for the "--time-signature" option.
-
- @return: The ratio between the two time fractions.
- @rtype: float
- """
- return (float(self.originalFractions[0])/self.originalFractions[1])*(float(self.fractions[1])/self.fractions[0])
-
- def get_shift_durations_parameters(self):
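-        # Express the ratio between the old and new time signature as the
-        # duration-log shift and dot count expected by \shiftDurations.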
- dur = math.ceil(math.log(self.get_fractions_ratio(), 2))
- dots = (1/self.get_fractions_ratio())/(math.pow(2, -dur))
- dots = int(math.log(2-dots, 0.5))
- return [dur, dots]
-
- def format_fraction(self, frac):
- if isinstance(frac, list):
- l = [self.format_fraction(f) for f in frac]
- return "(" + " ".join(l) + ")"
- else:
- return "%s" % frac
-
- def ly_expression(self):
- st = ''
-        # Print out the style if we have one, but the '() should only be
- # forced for 2/2 or 4/4, since in all other cases we'll get numeric
- # signatures anyway despite the default 'C signature style!
- is_common_signature = self.fractions in ([2, 2], [4, 4], [4, 2])
- if self.style and self.visible:
- if self.style == "common":
- st = "\\defaultTimeSignature"
- elif self.style != "'()":
- st = "\\once \\override Staff.TimeSignature.style = #%s " % self.style
- elif (self.style != "'()") or is_common_signature:
- st = "\\numericTimeSignature"
-
- if self.visible:
- omit = ''
- else:
- omit = r'\omit Staff.TimeSignature'
-
- # Easy case: self.fractions = [n,d] => normal \time n/d call:
- if len(self.fractions) == 2 and isinstance(self.fractions[0], int):
- return st + '\\time %d/%d ' % tuple(self.fractions) + omit
- elif self.fractions:
- return st + "\\compoundMeter #'%s" % self.format_fraction(self.fractions) + omit
- else:
- return st + ''
-
-
-class ClefChange (Music):
- def __init__(self):
- Music.__init__(self)
- self.type = 'G'
- self.position = 2
- self.octave = 0
-
- def octave_modifier(self):
- return {1: "^8", 2: "^15", -1: "_8", -2: "_15"}.get(self.octave, '')
-
- def clef_name(self):
- return {('G', 2): "treble",
- ('G', 1): "french",
- ('C', 1): "soprano",
- ('C', 2): "mezzosoprano",
- ('C', 3): "alto",
- ('C', 4): "tenor",
- ('C', 5): "baritone",
- ('F', 3): "varbaritone",
- ('F', 4): "bass",
- ('F', 5): "subbass",
- ("percussion", 2): "percussion",
- # Workaround: MuseScore uses PERC instead of percussion
- ("PERC", 2): "percussion",
- ("TAB", 5): get_tab_clef()}.get((self.type, self.position), None)
-
- def ly_expression(self):
- return '\\clef "%s%s"' % (self.clef_name(), self.octave_modifier())
-
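-    # Maps the clef letter to (glyph name, clef position, middle-C position),
-    # used when the clef has to be emitted as raw Scheme property sets.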
- clef_dict = {
- "G": ("clefs.G", -2, -6),
- "C": ("clefs.C", 0, 0),
- "F": ("clefs.F", 2, 6),
- }
-
- def lisp_expression(self):
- try:
- (glyph, pos, c0) = self.clef_dict[self.type]
- except KeyError:
- return ""
- clefsetting = """
- (make-music 'SequentialMusic
- 'elements (list
- (context-spec-music
- (make-property-set 'clefGlyph "%s") 'Staff)
- (context-spec-music
- (make-property-set 'clefPosition %d) 'Staff)
- (context-spec-music
- (make-property-set 'middleCPosition %d) 'Staff)))
-""" % (glyph, pos, c0)
- return clefsetting
-
-
-class Transposition (Music):
- def __init__(self):
- Music.__init__(self)
- self.pitch = None
-
- def ly_expression(self):
- self.pitch._force_absolute_pitch = True
- return '\\transposition %s' % self.pitch.ly_expression()
-
-
-class StaffChange (Music):
- def __init__(self, staff):
- Music.__init__(self)
- self.staff = staff
-
- def ly_expression(self):
- if self.staff:
- return "\\change Staff=\"%s\"" % self.staff
- else:
- return ''
-
-
-class SetEvent (Music):
- def __init__(self, contextprop, value):
- Music.__init__(self)
- self.context_prop = contextprop
- self.value = value
-
- def ly_expression(self):
- if self.value:
- return "\\set %s = %s" % (self.context_prop, self.value)
- else:
- return ''
-
-
-class StaffLinesEvent (Music):
- def __init__(self, lines):
- Music.__init__(self)
- self.lines = lines
-
- def ly_expression(self):
- if self.lines > 0:
- return "\\stopStaff \\override Staff.StaffSymbol.line-count = #%s \\startStaff" % self.lines
- else:
- return "\\stopStaff \\revert Staff.StaffSymbol.line-count \\startStaff"
-
-
-class TempoMark (Music):
- def __init__(self):
- Music.__init__(self)
- self.baseduration = None
- self.newduration = None
- self.beats = None
- self.parentheses = False
- self.text = None
-
- def set_base_duration(self, dur):
- self.baseduration = dur
-
- def set_new_duration(self, dur):
- self.newduration = dur
-
- def set_beats_per_minute(self, beats):
- self.beats = beats
-
- def set_parentheses(self, parentheses):
- self.parentheses = parentheses
-
- def set_text(self, text):
- self.text = text
-
- def wait_for_note(self):
- return False
-
- def duration_to_markup(self, dur):
- if dur:
- # Generate the markup to print the note
- return "\\general-align #Y #DOWN \\smaller \\note {%s} #UP" % dur.ly_expression()
- else:
- return ''
-
- def tempo_markup_template(self):
- return "\\mark\\markup { \\fontsize #-2 \\line { %s } }"
-
- def ly_expression(self):
- res = ''
- if not self.baseduration:
- return res
- if self.beats:
- if self.parentheses or self.text:
- res += "\\tempo \"%s\" %s=%s" % (self.text or '',
- self.baseduration.ly_expression(), self.beats)
- else:
- res += "\\tempo %s=%s" % (
- self.baseduration.ly_expression(), self.beats)
- elif self.newduration:
- dm = self.duration_to_markup(self.baseduration)
- ndm = self.duration_to_markup(self.newduration)
- if self.parentheses:
- contents = "\"(\" %s = %s \")\"" % (dm, ndm)
- else:
- contents = " %s = %s " % (dm, ndm)
- res += self.tempo_markup_template() % contents
- else:
- return ''
- return res
-
-
-class FiguredBassNote (Music):
- def __init__(self):
- Music.__init__(self)
- self.number = ''
- self.prefix = ''
- self.suffix = ''
-
- def set_prefix(self, prefix):
- self.prefix = prefix
-
- def set_suffix(self, suffix):
-        self.suffix = suffix
-
- def set_number(self, number):
- self.number = number
-
- def ly_expression(self):
- res = ''
- if self.number:
- res += self.number
- else:
- res += '_'
- if self.prefix:
- res += self.prefix
- if self.suffix:
- res += self.suffix
- return res
-
-
-class FiguredBassEvent (NestedMusic):
- def __init__(self):
- NestedMusic.__init__(self)
- self.duration = None
- self.real_duration = 0
- self.parentheses = False
- return
-
- def set_duration(self, dur):
- self.duration = dur
-
- def set_parentheses(self, par):
- self.parentheses = par
-
- def set_real_duration(self, dur):
- self.real_duration = dur
-
- def print_ly(self, printer):
- figured_bass_events = [e for e in self.elements if
- isinstance(e, FiguredBassNote)]
- if figured_bass_events:
- notes = []
- for x in figured_bass_events:
- notes.append(x.ly_expression())
- contents = ' '.join(notes)
- if self.parentheses:
- contents = '[%s]' % contents
- printer('<%s>' % contents)
- self.duration.print_ly(printer)
-
-
-class MultiMeasureRest(Music):
-
- def lisp_expression(self):
- return """
-(make-music
- 'MultiMeasureRestMusicGroup
- 'elements
- (list (make-music (quote BarCheck))
- (make-music
- 'ChordEvent
- 'elements
- (list (make-music
- 'MultiMeasureRestEvent
- 'duration
- %s)))
- (make-music (quote BarCheck))))
-""" % self.duration.lisp_expression()
-
- def ly_expression(self):
- return 'R%s' % self.duration.ly_expression()
-
-
-class Break (Music):
- def __init__(self, tp="break"):
- Music.__init__(self)
- self.type = tp
-
- def print_ly(self, printer):
- if self.type:
- printer.dump("\\%s" % self.type)
-
-
-class StaffGroup:
- def __init__(self, command="StaffGroup"):
- self.stafftype = command
- self.id = None
- self.instrument_name = None
- self.sound = None
- self.short_instrument_name = None
- self.symbol = None
- self.spanbar = None
- self.children = []
- self.is_group = True
- self.context_modifications = []
- # part_information is a list with entries of the form
- # [staffid, voicelist]
- # where voicelist is a list with entries of the form
- # [voiceid1, [lyricsid11, lyricsid12,...] ]
- self.part_information = None
-
- def append_staff(self, staff):
- self.children.append(staff)
-
- def set_part_information(self, part_name, staves_info):
- if part_name == self.id:
- self.part_information = staves_info
- else:
- for c in self.children:
- if hasattr(c, 'set_part_information'):
- c.set_part_information(part_name, staves_info)
-
- def add_context_modification(self, modification):
- self.context_modifications.append(modification)
-
- def print_ly_contents(self, printer):
- for c in self.children:
- if c:
- c.print_ly(printer)
- # Intention: I want to put the content of new StaffGroup in angled brackets (<< >>)
- # printer.dump ("test")# test is printed twice at the end of a staffgroup with two staves.
- # printer ("test") # test is printed twice at the end of a staffgroup with two staves.
-
- def needs_with(self):
- needs_with = False
- needs_with |= self.spanbar == "no"
- needs_with |= self.instrument_name is not None
- needs_with |= self.short_instrument_name is not None
- needs_with |= (self.symbol is not None) and (self.symbol != "bracket")
- return needs_with
-
- def print_ly_context_mods(self, printer):
- if self.instrument_name or self.short_instrument_name:
- printer.dump("\\consists \"Instrument_name_engraver\"")
- if self.spanbar == "no":
- printer.dump("\\hide SpanBar")
- brack = {"brace": "SystemStartBrace",
- "none": "SystemStartBar",
- "line": "SystemStartSquare"}.get(self.symbol, None)
- if brack:
- printer.dump("systemStartDelimiter = #'%s" % brack)
-
- def print_ly_overrides(self, printer):
- needs_with = self.needs_with() | (len(self.context_modifications) > 0)
- if needs_with:
- printer.dump("\\with {")
- self.print_ly_context_mods(printer)
- for m in self.context_modifications:
- printer.dump(m)
- printer.dump("}")
- printer.newline()
- # print a single << after StaffGroup only when the with-block is not needed.
- # This doesn't work. << is printed before and after StaffGroup!
- # else:
- # printer.dump (" <<")
-    # prints loads of << before and after StaffGroup and before \set Staff.instrumentName
- # elif not needs_with:
- # printer.dump (" <<")
-
- def print_chords(self, printer):
- try:
- for [staff_id, voices] in self.part_information:
- for [v, lyrics, figuredbass, chordnames, fretboards] in voices:
- if chordnames:
- printer(r'\context ChordNames = "%s" {%s \%s}' % (
- chordnames, get_transpose("string"), chordnames))
- printer.newline()
- except TypeError:
- return
-
- def print_fretboards(self, printer):
- try:
- for [staff_id, voices] in self.part_information:
- for [v, lyrics, figuredbass, chordnames, fretboards] in voices:
- if fretboards:
- printer(r'\context FretBoards = "%s" {%s \%s}' % (
- fretboards, get_transpose("string"), fretboards))
- printer.newline()
- except TypeError:
- return
-
- def print_ly(self, printer):
- self.print_chords(printer)
- self.print_fretboards(printer)
- if self.stafftype:
- printer.dump("\\new %s" % self.stafftype)
- self.print_ly_overrides(printer)
- printer.newline()
- if self.stafftype:
- printer.dump("<<")
- printer.newline()
- if self.stafftype and self.instrument_name:
- printer.dump("\\set %s.instrumentName = %s" % (self.stafftype,
- escape_instrument_string(self.instrument_name)))
- printer.newline()
- if self.stafftype and self.short_instrument_name:
- printer.dump("\\set %s.shortInstrumentName = %s" % (self.stafftype,
- escape_instrument_string(self.short_instrument_name)))
- printer.newline()
- if self.sound:
- printer.dump(r'\set %s.midiInstrument = "%s"' %
- (self.stafftype, self.sound))
- printer.newline()
- self.print_ly_contents(printer)
- printer.newline()
- if self.stafftype:
- printer.dump(">>")
- printer.newline()
-
-
-class Staff (StaffGroup):
- def __init__(self, command="Staff"):
- StaffGroup.__init__(self, command)
- self.is_group = False
- self.part = None
- self.voice_command = "Voice"
- self.substafftype = None
- self.sound = None
-
- def needs_with(self):
- return False
-
- def print_ly_context_mods(self, printer):
- # printer.dump ("test") #does nothing.
- pass
-
- def print_ly_contents(self, printer):
- if not self.id or not self.part_information:
- return
- sub_staff_type = self.substafftype
- if not sub_staff_type:
- sub_staff_type = self.stafftype
- # printer.dump ("test") #prints test in each staff after the definitions of the instrument name and before the definition of the contexts.
- printer.newline()
-
- for [staff_id, voices] in self.part_information:
- # now comes the real staff definition:
- if staff_id:
- printer('\\context %s = "%s" << ' % (sub_staff_type, staff_id))
- else:
- printer('\\context %s << ' % sub_staff_type)
- printer.newline()
- printer.dump(r"\mergeDifferentlyDottedOn\mergeDifferentlyHeadedOn")
- printer.newline()
- n = 0
- nr_voices = len(voices)
- for [v, lyrics, figuredbass, chordnames, fretboards] in voices:
- n += 1
- voice_count_text = ''
- if nr_voices > 1:
- """
-The next line contains a bug: The voices might not appear in numerical order! Some voices might be missing e.g. if the xml file contains only voice one, three and four, this would result in: \voiceOne, \voiceTwo and \voiceThree. This causes wrong stem directions and collisions.
- """
- voice_count_text = {
- 1: ' \\voiceOne', 2: ' \\voiceTwo', 3: ' \\voiceThree'}.get(n, ' \\voiceFour')
- printer('\\context %s = "%s" {%s %s \\%s }' % (
- self.voice_command, v, get_transpose("string"), voice_count_text, v))
- printer.newline()
- lyrics_id = 1
- for l in lyrics:
- printer('\\new Lyrics \\lyricsto "%s" { \\set stanza = "%s." \\%s }' % (
- v, lyrics_id, l))
- lyrics_id += 1
- printer.newline()
- if figuredbass:
- printer(r'\context FiguredBass = "%s" \%s' %
- (figuredbass, figuredbass))
- printer('>>')
- # printer.dump ("test") #prints test after each definition of a context.
- #printer.newline ()
- # printer.dump ("test") #prints test after each definition of a context.
-
- def print_ly(self, printer):
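-        # A part that contains several staves is printed as a PianoStaff
-        # holding plain Staff contexts.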
- if self.part_information and len(self.part_information) > 1:
- self.stafftype = "PianoStaff"
- self.substafftype = "Staff"
- #printer.dump ('test')
- StaffGroup.print_ly(self, printer)
-
-
-class TabStaff (Staff):
- def __init__(self, command="TabStaff"):
- Staff.__init__(self, command)
- self.string_tunings = []
- self.tablature_format = None
- self.voice_command = "TabVoice"
-
- def print_ly_overrides(self, printer):
- if self.string_tunings or self.tablature_format:
- printer.dump("\\with {")
- if self.string_tunings:
- printer.dump("stringTunings = #`(")
- for i in self.string_tunings:
- printer.dump(",%s" % i.lisp_expression())
- printer.dump(")")
- if self.tablature_format:
- printer.dump("tablatureFormat = #%s" % self.tablature_format)
- printer.dump("}")
-
-
-class DrumStaff (Staff):
- def __init__(self, command="DrumStaff"):
- Staff.__init__(self, command)
- self.drum_style_table = None
- self.voice_command = "DrumVoice"
-
- def print_ly_overrides(self, printer):
- if self.drum_style_table:
- printer.dump(r"\with {")
- printer.dump("drumStyleTable = #%s" % self.drum_style_table)
- printer.dump("}")
-
-
-class RhythmicStaff (Staff):
- def __init__(self, command="RhythmicStaff"):
- Staff.__init__(self, command)
-
-# Test
-# def print_staffgroup_closing_brackets (self, printer): #test see class Score / class Staff
-# printer.dump ("test")
-
-
-class Score:
- def __init__(self):
- """
- Constructs a new Score object.
- """
- self.contents = None
- self.create_midi = False
-
- def set_contents(self, contents):
- self.contents = contents
-
- def set_part_information(self, part_id, staves_info):
- if self.contents:
- self.contents.set_part_information(part_id, staves_info)
-
- def set_tempo(self, tempo):
- """
- Set the tempo attribute of the Score.
- This attribute can be used in L{print_ly} for the midi output (see L{musicxml.Sound}).
-
- @param tempo: The value of the tempo, in beats per minute.
- @type tempo: String
- """
- self.tempo = tempo
- # Test
-# def print_staffgroup_closing_brackets (self, printer): #test see class Score / class Staff
-# printer.dump ("test")
-
- def print_ly(self, printer):
- """
- Print the content of the score to the printer, in lilypond format.
-
- @param printer: A printer given to display correctly the output.
- @type printer: L{Output_printer}
- """
- self.create_midi = get_create_midi()
- printer.dump("\\score {")
- printer.newline()
- # prints opening <<:
- printer.dump('<<')
- printer.newline()
- if self.contents:
- self.contents.print_ly(printer)
- # printer.dump ("test") prints test once before the >> of the score block, independent of the existence of a staffgroup.
- # if StaffGroup == False: # True or False: nothing happens.
- # printer.dump ('>>')
- printer.dump('>>')
- printer.newline()
- # StaffGroup.print_staffgroup_closing_brackets(self, printer) #TypeError: unbound method print_staffgroup_closing_brackets() must be called with StaffGroup instance as first argument (got Score instance instead)
- # print_staffgroup_closing_brackets(self, printer) #NameError: global name 'print_staffgroup_closing_brackets' is not defined. prints test once before the >> of the score block, independent of the existence of a staffgroup.
- printer.dump("\\layout {}")
- printer.newline()
-        # If the --midi option was not passed to musicxml2ly, emit the \midi block commented out
- if self.create_midi:
- printer.dump("}")
- printer.newline()
- printer.dump("\\score {")
- printer.newline()
- printer.dump("\\unfoldRepeats \\articulate {")
- printer.newline()
- self.contents.print_ly(printer)
- printer.dump("}")
- printer.newline()
- else:
- printer.dump(
- "% To create MIDI output, uncomment the following line:")
- printer.newline()
- printer.dump("% ")
- printer.dump("\\midi {\\tempo 4 = "+self.tempo+" }")
- printer.newline()
- printer.dump("}")
- printer.newline()
-
-
-def test_pitch():
- bflat = Pitch()
- bflat.alteration = -1
- bflat.step = 6
- bflat.octave = -1
- fifth = Pitch()
- fifth.step = 4
- down = Pitch()
- down.step = -4
- down.normalize()
-
- print(bflat.semitones())
- print(bflat.transposed(fifth), bflat.transposed(fifth).transposed(fifth))
- print(bflat.transposed(fifth).transposed(fifth).transposed(fifth))
-
- print(bflat.semitones(), 'down')
- print(bflat.transposed(down))
- print(bflat.transposed(down).transposed(down))
- print(bflat.transposed(down).transposed(down).transposed(down))
-
-
-def test_printer():
- def make_note():
- evc = ChordEvent()
- n = NoteEvent()
- evc.append(n)
- return n
-
- def make_tup():
- m = SequentialMusic()
- m.append(make_note())
- m.append(make_note())
- m.append(make_note())
-
- t = TimeScaledMusic()
- t.numerator = 2
- t.denominator = 3
- t.element = m
- return t
-
- m = SequentialMusic()
- m.append(make_tup())
- m.append(make_tup())
- m.append(make_tup())
-
- printer = Output_printer()
- m.print_ly(printer)
- printer.newline()
-
-
-def test_expr():
- m = SequentialMusic()
- l = 2
- evc = ChordEvent()
- n = NoteEvent()
- n.duration.duration_log = l
- n.pitch.step = 1
- evc.insert_around(None, n, 0)
- m.insert_around(None, evc, 0)
-
- evc = ChordEvent()
- n = NoteEvent()
- n.duration.duration_log = l
- n.pitch.step = 3
- evc.insert_around(None, n, 0)
- m.insert_around(None, evc, 0)
-
- evc = ChordEvent()
- n = NoteEvent()
- n.duration.duration_log = l
- n.pitch.step = 2
- evc.insert_around(None, n, 0)
- m.insert_around(None, evc, 0)
-
- evc = ClefChange()
- evc.type = 'treble'
- m.insert_around(None, evc, 0)
-
- evc = ChordEvent()
- tonic = Pitch()
- tonic.step = 2
- tonic.alteration = -2
- n = KeySignatureChange()
- n.tonic = tonic.copy()
- n.scale = [0, 0, -2, 0, 0, -2, -2]
-
- evc.insert_around(None, n, 0)
- m.insert_around(None, evc, 0)
-
- return m
-
-
-if __name__ == '__main__':
- test_printer()
- test_pitch()
-
- expr = test_expr()
- expr.set_start(Fraction(0))
- expr.print_ly(Output_printer())
- start = Fraction(0, 4)
- stop = Fraction(4, 2)
-
- def sub(x, start=start, stop=stop):
- ok = x.start >= start and x.start + x.get_length() <= stop
- return ok
-
- print(expr.lisp_sub_expression(sub))
diff --git a/spaces/PeepDaSlan9/whisper-web/assets/worker-73961048.js b/spaces/PeepDaSlan9/whisper-web/assets/worker-73961048.js
deleted file mode 100644
index 33f27ad69eb417003f45ccb9883247b19c9e2003..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/whisper-web/assets/worker-73961048.js
+++ /dev/null
@@ -1,1790 +0,0 @@
-var pn=Object.defineProperty;var gn=(nt,y,n)=>y in nt?pn(nt,y,{enumerable:!0,configurable:!0,writable:!0,value:n}):nt[y]=n;var le=(nt,y,n)=>(gn(nt,typeof y!="symbol"?y+"":y,n),n);(function(){var nt;"use strict";function _mergeNamespaces(y,n){return n.forEach(function(o){o&&typeof o!="string"&&!Array.isArray(o)&&Object.keys(o).forEach(function(l){if(l!=="default"&&!(l in y)){var c=Object.getOwnPropertyDescriptor(o,l);Object.defineProperty(y,l,c.get?c:{enumerable:!0,get:function(){return o[l]}})}})}),Object.freeze(y)}function dispatchCallback(y,n){y!==null&&y(n)}function reverseDictionary(y){return Object.fromEntries(Object.entries(y).map(([n,o])=>[o,n]))}function escapeRegExp(y){return y.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}const Callable=class{constructor(){let y=function(...n){return y._call(...n)};return Object.setPrototypeOf(y,new.target.prototype)}_call(...y){throw Error("Must implement _call method in subclass")}};function isString(y){return typeof y=="string"||y instanceof String}function isTypedArray(y){var n,o,l;return((l=(o=(n=y==null?void 0:y.prototype)==null?void 0:n.__proto__)==null?void 0:o.constructor)==null?void 0:l.name)==="TypedArray"}function isIntegralNumber(y){return Number.isInteger(y)||typeof y=="bigint"}function exists(y){return y!=null}function calculateDimensions(y){const n=[];let o=y;for(;Array.isArray(o);)n.push(o.length),o=o[0];return n}function pop(y,n,o=void 0){const l=y[n];if(l!==void 0)return delete y[n],l;if(o===void 0)throw Error(`Key ${n} does not exist in object.`);return o}var fs={},ONNX_NODE=Object.freeze({__proto__:null,default:fs});function getDefaultExportFromCjs(y){return y&&y.__esModule&&Object.prototype.hasOwnProperty.call(y,"default")?y.default:y}function getAugmentedNamespace(y){if(y.__esModule)return y;var n=y.default;if(typeof n=="function"){var o=function l(){if(this instanceof l){var c=[null];c.push.apply(c,arguments);var f=Function.bind.apply(n,c);return new f}return n.apply(this,arguments)};o.prototype=n.prototype}else o={};return Object.defineProperty(o,"__esModule",{value:!0}),Object.keys(y).forEach(function(l){var c=Object.getOwnPropertyDescriptor(y,l);Object.defineProperty(o,l,c.get?c:{enumerable:!0,get:function(){return y[l]}})}),o}var ortWeb_min$1={exports:{}};const backends={},backendsSortedByPriority=[],registerBackend=(y,n,o)=>{if(n&&typeof n.init=="function"&&typeof n.createSessionHandler=="function"){const l=backends[y];if(l===void 0)backends[y]={backend:n,priority:o};else{if(l.priority>o)return;if(l.priority===o&&l.backend!==n)throw new Error(`cannot register backend "${y}" using priority ${o}`)}if(o>=0){const c=backendsSortedByPriority.indexOf(y);c!==-1&&backendsSortedByPriority.splice(c,1);for(let f=0;f{const n=y.length===0?backendsSortedByPriority:y,o=[];for(const l of n){const c=backends[l];if(c){if(c.initialized)return c.backend;if(c.aborted)continue;const f=!!c.initPromise;try{return f||(c.initPromise=c.backend.init()),await c.initPromise,c.initialized=!0,c.backend}catch(a){f||o.push({name:l,err:a}),c.aborted=!0}finally{delete c.initPromise}}}throw new Error(`no available backend found. 
ERR: ${o.map(l=>`[${l.name}] ${l.err}`).join(", ")}`)};class EnvImpl{constructor(){this.wasm={},this.webgl={},this.logLevelInternal="warning"}set logLevel(n){if(n!==void 0){if(typeof n!="string"||["verbose","info","warning","error","fatal"].indexOf(n)===-1)throw new Error(`Unsupported logging level: ${n}`);this.logLevelInternal=n}}get logLevel(){return this.logLevelInternal}}const env$1=new EnvImpl,isBigInt64ArrayAvailable=typeof BigInt64Array<"u"&&typeof BigInt64Array.from=="function",isBigUint64ArrayAvailable=typeof BigUint64Array<"u"&&typeof BigUint64Array.from=="function",NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP=new Map([["float32",Float32Array],["uint8",Uint8Array],["int8",Int8Array],["uint16",Uint16Array],["int16",Int16Array],["int32",Int32Array],["bool",Uint8Array],["float64",Float64Array],["uint32",Uint32Array]]),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP=new Map([[Float32Array,"float32"],[Uint8Array,"uint8"],[Int8Array,"int8"],[Uint16Array,"uint16"],[Int16Array,"int16"],[Int32Array,"int32"],[Float64Array,"float64"],[Uint32Array,"uint32"]]);isBigInt64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("int64",BigInt64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigInt64Array,"int64")),isBigUint64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("uint64",BigUint64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigUint64Array,"uint64"));const calculateSize=y=>{let n=1;for(let o=0;o{const t=document.createElement("canvas"),e=t.getContext("2d");if(!n||!e)return s();const r=new Image;r.crossOrigin="Anonymous",r.src=n,r.onload=()=>{t.width=r.width,t.height=r.height,e.drawImage(r,0,0,t.width,t.height);const i=e.getImageData(0,0,t.width,t.height);if(o!==void 0){if(o.height!==void 0&&o.height!==t.height)throw new Error("Image input config height doesn't match ImageBitmap height");if(p.height=t.height,o.width!==void 0&&o.width!==t.width)throw new Error("Image input config width doesn't match ImageBitmap width");p.width=t.width}else p.height=t.height,p.width=t.width;u(ut.bufferToTensor(i.data,p))}});throw new Error("Input data provided is not supported - aborted tensor creation")}if(h!==void 0)return ut.bufferToTensor(h,p);throw new Error("Input data provided is not supported - aborted tensor creation")}toImageData(n){var o,l;const c=document.createElement("canvas").getContext("2d");let f;if(c!=null){const a=this.dims[3],h=this.dims[2],p=this.dims[1],u=n!==void 0&&n.format!==void 0?n.format:"RGB",s=n!==void 0&&((o=n.norm)===null||o===void 0?void 0:o.mean)!==void 0?n.norm.mean:255,t=n!==void 0&&((l=n.norm)===null||l===void 0?void 0:l.bias)!==void 0?n.norm.bias:0,e=h*a;if(n!==void 0){if(n.height!==void 0&&n.height!==h)throw new Error("Image output config height doesn't match tensor height");if(n.width!==void 0&&n.width!==a)throw new Error("Image output config width doesn't match tensor width");if(n.format!==void 0&&p===4&&n.format!=="RGBA"||p===3&&n.format!=="RGB"&&n.format!=="BGR")throw new Error("Tensor format doesn't match input tensor dims")}const r=4;let i=0,d=1,g=2,m=3,b=0,_=e,v=e*2,w=-1;u==="RGBA"?(b=0,_=e,v=e*2,w=e*3):u==="RGB"?(b=0,_=e,v=e*2):u==="RBG"&&(b=0,v=e,_=e*2),f=c.createImageData(a,h);for(let S=0;S"u")throw new Error(`input '${u}' is missing in 'feeds'.`);if(a)for(const u of this.outputNames)c[u]=null;const h=await this.handler.run(n,c,f),p={};for(const u in h)Object.hasOwnProperty.call(h,u)&&(p[u]=new Tensor$1(h[u].type,h[u].data,h[u].dims));return p}static async create(n,o,l,c){let f,a={};if(typeof n=="string"){if(f=n,typeof o=="object"&&o!==null)a=o;else 
if(typeof o<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof Uint8Array){if(f=n,typeof o=="object"&&o!==null)a=o;else if(typeof o<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof ArrayBuffer||typeof SharedArrayBuffer<"u"&&n instanceof SharedArrayBuffer){const t=n;let e=0,r=n.byteLength;if(typeof o=="object"&&o!==null)a=o;else if(typeof o=="number"){if(e=o,!Number.isSafeInteger(e))throw new RangeError("'byteOffset' must be an integer.");if(e<0||e>=t.byteLength)throw new RangeError(`'byteOffset' is out of range [0, ${t.byteLength}).`);if(r=n.byteLength-e,typeof l=="number"){if(r=l,!Number.isSafeInteger(r))throw new RangeError("'byteLength' must be an integer.");if(r<=0||e+r>t.byteLength)throw new RangeError(`'byteLength' is out of range (0, ${t.byteLength-e}].`);if(typeof c=="object"&&c!==null)a=c;else if(typeof c<"u")throw new TypeError("'options' must be an object.")}else if(typeof l<"u")throw new TypeError("'byteLength' must be a number.")}else if(typeof o<"u")throw new TypeError("'options' must be an object.");f=new Uint8Array(t,e,r)}else throw new TypeError("Unexpected argument[0]: must be 'path' or 'buffer'.");const p=(a.executionProviders||[]).map(t=>typeof t=="string"?t:t.name),s=await(await resolveBackend(p)).createSessionHandler(f,a);return new dn(s)}startProfiling(){this.handler.startProfiling()}endProfiling(){this.handler.endProfiling()}get inputNames(){return this.handler.inputNames}get outputNames(){return this.handler.outputNames}};const InferenceSession$1=InferenceSession$2;var lib=Object.freeze({__proto__:null,InferenceSession:InferenceSession$1,Tensor:Tensor$1,env:env$1,registerBackend}),require$$0=getAugmentedNamespace(lib);/*!
-* ONNX Runtime Web v1.14.0
-* Copyright (c) Microsoft Corporation. All rights reserved.
-* Licensed under the MIT License.
-*/(function(module,exports){(function(y,n){module.exports=n(require$$0)})(self,__WEBPACK_EXTERNAL_MODULE__1670__=>(()=>{var __webpack_modules__={3474:(y,n,o)=>{var l,c=(l=(l=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(f){function a(){return X.buffer!=ee&&Ee(X.buffer),ue}function h(){return X.buffer!=ee&&Ee(X.buffer),Ae}function p(){return X.buffer!=ee&&Ee(X.buffer),xe}function u(){return X.buffer!=ee&&Ee(X.buffer),oe}function s(){return X.buffer!=ee&&Ee(X.buffer),we}var t,e,r;f=f||{},t||(t=f!==void 0?f:{}),t.ready=new Promise(function(T,E){e=T,r=E});var i,d,g,m,b,_,v=Object.assign({},t),w="./this.program",S=(T,E)=>{throw E},A=typeof window=="object",O=typeof importScripts=="function",x=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",I=t.ENVIRONMENT_IS_PTHREAD||!1,$="";function B(T){return t.locateFile?t.locateFile(T,$):$+T}if(x){let T;$=O?o(908).dirname($)+"/":"//",_=()=>{b||(m=o(1384),b=o(908))},i=function(E,k){return _(),E=b.normalize(E),m.readFileSync(E,k?void 0:"utf8")},g=E=>((E=i(E,!0)).buffer||(E=new Uint8Array(E)),E),d=(E,k,C)=>{_(),E=b.normalize(E),m.readFile(E,function(z,V){z?C(z):k(V.buffer)})},1{if(qe())throw process.exitCode=E,k;k instanceof Je||j("exiting due to exception: "+k),process.exit(E)},t.inspect=function(){return"[Emscripten Module object]"};try{T=o(9925)}catch(E){throw console.error('The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?'),E}o.g.Worker=T.Worker}else(A||O)&&(O?$=self.location.href:typeof document<"u"&&document.currentScript&&($=document.currentScript.src),l&&($=l),$=$.indexOf("blob:")!==0?$.substr(0,$.replace(/[?#].*/,"").lastIndexOf("/")+1):"",x||(i=T=>{var E=new XMLHttpRequest;return E.open("GET",T,!1),E.send(null),E.responseText},O&&(g=T=>{var E=new XMLHttpRequest;return E.open("GET",T,!1),E.responseType="arraybuffer",E.send(null),new Uint8Array(E.response)}),d=(T,E,k)=>{var C=new XMLHttpRequest;C.open("GET",T,!0),C.responseType="arraybuffer",C.onload=()=>{C.status==200||C.status==0&&C.response?E(C.response):k()},C.onerror=k,C.send(null)}));x&&typeof performance>"u"&&(o.g.performance=o(6953).performance);var L=console.log.bind(console),N=console.warn.bind(console);x&&(_(),L=T=>m.writeSync(1,T+`
-`),N=T=>m.writeSync(2,T+`
-`));var H,M=t.print||L,j=t.printErr||N;Object.assign(t,v),v=null,t.thisProgram&&(w=t.thisProgram),t.quit&&(S=t.quit),t.wasmBinary&&(H=t.wasmBinary);var Z=t.noExitRuntime||!1;typeof WebAssembly!="object"&&ge("no native wasm support detected");var X,Q,ee,ue,Ae,xe,oe,we,ye=!1,ke=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function Ne(T,E,k){var C=(E>>>=0)+k;for(k=E;T[k]&&!(k>=C);)++k;if(16(z=(240&z)==224?(15&z)<<12|V<<6|K:(7&z)<<18|V<<12|K<<6|63&T[E++])?C+=String.fromCharCode(z):(z-=65536,C+=String.fromCharCode(55296|z>>10,56320|1023&z))}}else C+=String.fromCharCode(z)}return C}function Te(T,E){return(T>>>=0)?Ne(h(),T,E):""}function $e(T,E,k,C){if(!(0>>=0;C=k+C-1;for(var V=0;V=K&&(K=65536+((1023&K)<<10)|1023&T.charCodeAt(++V)),127>=K){if(k>=C)break;E[k++>>>0]=K}else{if(2047>=K){if(k+1>=C)break;E[k++>>>0]=192|K>>6}else{if(65535>=K){if(k+2>=C)break;E[k++>>>0]=224|K>>12}else{if(k+3>=C)break;E[k++>>>0]=240|K>>18,E[k++>>>0]=128|K>>12&63}E[k++>>>0]=128|K>>6&63}E[k++>>>0]=128|63&K}}return E[k>>>0]=0,k-z}function Ce(T){for(var E=0,k=0;k=C?E++:2047>=C?E+=2:55296<=C&&57343>=C?(E+=4,++k):E+=3}return E}function Ee(T){ee=T,t.HEAP8=ue=new Int8Array(T),t.HEAP16=new Int16Array(T),t.HEAP32=xe=new Int32Array(T),t.HEAPU8=Ae=new Uint8Array(T),t.HEAPU16=new Uint16Array(T),t.HEAPU32=oe=new Uint32Array(T),t.HEAPF32=new Float32Array(T),t.HEAPF64=we=new Float64Array(T)}I&&(ee=t.buffer);var Oe=t.INITIAL_MEMORY||16777216;if(I)X=t.wasmMemory,ee=t.buffer;else if(t.wasmMemory)X=t.wasmMemory;else if(!((X=new WebAssembly.Memory({initial:Oe/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw j("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),x&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");X&&(ee=X.buffer),Oe=ee.byteLength,Ee(ee);var Be,Ve=[],Ge=[],Xe=[],Ze=[];function qe(){return Z||!1}function Ue(){var T=t.preRun.shift();Ve.unshift(T)}var Ie,je=0,Ye=null;function ge(T){throw I?postMessage({cmd:"onAbort",arg:T}):t.onAbort&&t.onAbort(T),j(T="Aborted("+T+")"),ye=!0,T=new WebAssembly.RuntimeError(T+". 
Build with -sASSERTIONS for more info."),r(T),T}function ft(){return Ie.startsWith("data:application/octet-stream;base64,")}function lt(){var T=Ie;try{if(T==Ie&&H)return new Uint8Array(H);if(g)return g(T);throw"both async and sync fetching of the wasm failed"}catch(E){ge(E)}}Ie="ort-wasm-threaded.wasm",ft()||(Ie=B(Ie));var Pt={};function Je(T){this.name="ExitStatus",this.message="Program terminated with exit("+T+")",this.status=T}function ct(T){(T=re.Vb[T])||ge(),re.mc(T)}function dt(T){var E=re.Cc();if(!E)return 6;re.ac.push(E),re.Vb[T.Ub]=E,E.Ub=T.Ub;var k={cmd:"run",start_routine:T.Ic,arg:T.zc,pthread_ptr:T.Ub};return E.$b=()=>{k.time=performance.now(),E.postMessage(k,T.Nc)},E.loaded&&(E.$b(),delete E.$b),0}function Re(T){if(I)return J(1,1,T);qe()||(re.oc(),t.onExit&&t.onExit(T),ye=!0),S(T,new Je(T))}function it(T,E){if(!E&&I)throw kt(T),"unwind";qe()||I||(Wt(),rt(Xe),qt(0),Ft[1].length&&Nt(1,10),Ft[2].length&&Nt(2,10),re.oc()),Re(T)}var re={Yb:[],ac:[],qc:[],Vb:{},fc:function(){I&&re.Ec()},Pc:function(){},Ec:function(){re.receiveObjectTransfer=re.Gc,re.threadInitTLS=re.pc,re.setExitStatus=re.nc,Z=!1},nc:function(){},oc:function(){for(var T of Object.values(re.Vb))re.mc(T);for(T of re.Yb)T.terminate();re.Yb=[]},mc:function(T){var E=T.Ub;delete re.Vb[E],re.Yb.push(T),re.ac.splice(re.ac.indexOf(T),1),T.Ub=0,Rt(E)},Gc:function(){},pc:function(){re.qc.forEach(T=>T())},Fc:function(T,E){T.onmessage=k=>{var C=(k=k.data).cmd;if(T.Ub&&(re.Bc=T.Ub),k.targetThread&&k.targetThread!=Mt()){var z=re.Vb[k.Qc];z?z.postMessage(k,k.transferList):j('Internal error! Worker sent a message "'+C+'" to target pthread '+k.targetThread+", but that thread no longer exists!")}else C==="processProxyingQueue"?F(k.queue):C==="spawnThread"?dt(k):C==="cleanupThread"?ct(k.thread):C==="killThread"?(k=k.thread,C=re.Vb[k],delete re.Vb[k],C.terminate(),Rt(k),re.ac.splice(re.ac.indexOf(C),1),C.Ub=0):C==="cancelThread"?re.Vb[k.thread].postMessage({cmd:"cancel"}):C==="loaded"?(T.loaded=!0,E&&E(T),T.$b&&(T.$b(),delete T.$b)):C==="print"?M("Thread "+k.threadId+": "+k.text):C==="printErr"?j("Thread "+k.threadId+": "+k.text):C==="alert"?alert("Thread "+k.threadId+": "+k.text):k.target==="setimmediate"?T.postMessage(k):C==="onAbort"?t.onAbort&&t.onAbort(k.arg):C&&j("worker sent an unknown command "+C);re.Bc=void 0},T.onerror=k=>{throw j("worker sent an error! 
"+k.filename+":"+k.lineno+": "+k.message),k},x&&(T.on("message",function(k){T.onmessage({data:k})}),T.on("error",function(k){T.onerror(k)}),T.on("detachedExit",function(){})),T.postMessage({cmd:"load",urlOrBlob:t.mainScriptUrlOrBlob||l,wasmMemory:X,wasmModule:Q})},yc:function(){var T=B("ort-wasm-threaded.worker.js");re.Yb.push(new Worker(T))},Cc:function(){return re.Yb.length==0&&(re.yc(),re.Fc(re.Yb[0])),re.Yb.pop()}};function rt(T){for(;0>2>>>0];T=p()[T+48>>2>>>0],Zt(E,E-T),de(E)};var Qe=[];function ve(T){var E=Qe[T];return E||(T>=Qe.length&&(Qe.length=T+1),Qe[T]=E=Be.get(T)),E}t.invokeEntryPoint=function(T,E){T=ve(T)(E),qe()?re.nc(T):Kt(T)};var ot,pt,st=[],ae=0,ie=0;function se(T){this.Zb=T,this.Sb=T-24,this.xc=function(E){u()[this.Sb+4>>2>>>0]=E},this.bc=function(){return u()[this.Sb+4>>2>>>0]},this.wc=function(E){u()[this.Sb+8>>2>>>0]=E},this.Dc=function(){return u()[this.Sb+8>>2>>>0]},this.rc=function(){p()[this.Sb>>2>>>0]=0},this.hc=function(E){E=E?1:0,a()[this.Sb+12>>0>>>0]=E},this.uc=function(){return a()[this.Sb+12>>0>>>0]!=0},this.ic=function(E){E=E?1:0,a()[this.Sb+13>>0>>>0]=E},this.kc=function(){return a()[this.Sb+13>>0>>>0]!=0},this.fc=function(E,k){this.cc(0),this.xc(E),this.wc(k),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(p(),this.Sb>>2,1)},this.Hc=function(){return Atomics.sub(p(),this.Sb>>2,1)===1},this.cc=function(E){u()[this.Sb+16>>2>>>0]=E},this.tc=function(){return u()[this.Sb+16>>2>>>0]},this.vc=function(){if(Jt(this.bc()))return u()[this.Zb>>2>>>0];var E=this.tc();return E!==0?E:this.Zb}}function gt(T){return Gt(new se(T).Sb)}function at(T,E,k,C){return I?J(3,1,T,E,k,C):mt(T,E,k,C)}function mt(T,E,k,C){if(typeof SharedArrayBuffer>"u")return j("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var z=[];return I&&z.length===0?at(T,E,k,C):(T={Ic:k,Ub:T,zc:C,Nc:z},I?(T.Oc="spawnThread",postMessage(T,z),0):dt(T))}function bt(T,E,k){return I?J(4,1,T,E,k):0}function yt(T,E){if(I)return J(5,1,T,E)}function _t(T,E){if(I)return J(6,1,T,E)}function wt(T,E,k){if(I)return J(7,1,T,E,k)}function vt(T,E,k){return I?J(8,1,T,E,k):0}function xt(T,E){if(I)return J(9,1,T,E)}function Tt(T,E,k){if(I)return J(10,1,T,E,k)}function St(T,E,k,C){if(I)return J(11,1,T,E,k,C)}function At(T,E,k,C){if(I)return J(12,1,T,E,k,C)}function Ot(T,E,k,C){if(I)return J(13,1,T,E,k,C)}function Et(T){if(I)return J(14,1,T)}function P(T,E){if(I)return J(15,1,T,E)}function D(T,E,k){if(I)return J(16,1,T,E,k)}function F(T){Atomics.store(p(),T>>2,1),Mt()&&Yt(T),Atomics.compareExchange(p(),T>>2,1,0)}function R(T){return u()[T>>>2]+4294967296*p()[T+4>>>2]}function U(T,E,k,C,z,V){return I?J(17,1,T,E,k,C,z,V):-52}function W(T,E,k,C,z,V){if(I)return J(18,1,T,E,k,C,z,V)}function Y(T){var E=Ce(T)+1,k=Lt(E);return k&&$e(T,a(),k,E),k}function te(T,E,k){function C(me){return(me=me.toTimeString().match(/\(([A-Za-z ]+)\)$/))?me[1]:"GMT"}if(I)return J(19,1,T,E,k);var z=new Date().getFullYear(),V=new Date(z,0,1),K=new Date(z,6,1);z=V.getTimezoneOffset();var ne=K.getTimezoneOffset(),pe=Math.max(z,ne);p()[T>>2>>>0]=60*pe,p()[E>>2>>>0]=+(z!=ne),T=C(V),E=C(K),T=Y(T),E=Y(E),ne>2>>>0]=T,u()[k+4>>2>>>0]=E):(u()[k>>2>>>0]=E,u()[k+4>>2>>>0]=T)}function J(T,E){var k=arguments.length-2,C=arguments;return It(()=>{for(var z=jt(8*k),V=z>>3,K=0;K>>0]=ne}return Xt(T,k,z,E)})}t.executeNotifiedProxyingQueue=F,pt=x?()=>{var T=process.hrtime();return 1e3*T[0]+T[1]/1e6}:I?()=>performance.now()-t.__performance_now_clock_drift:()=>performance.now();var ce,Se=[],Le={};function Fe(){if(!ce){var 
T,E={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:w||"./this.program"};for(T in Le)Le[T]===void 0?delete E[T]:E[T]=Le[T];var k=[];for(T in E)k.push(T+"="+E[T]);ce=k}return ce}function G(T,E){if(I)return J(20,1,T,E);var k=0;return Fe().forEach(function(C,z){var V=E+k;for(z=u()[T+4*z>>2>>>0]=V,V=0;V>0>>>0]=C.charCodeAt(V);a()[z>>0>>>0]=0,k+=C.length+1}),0}function be(T,E){if(I)return J(21,1,T,E);var k=Fe();u()[T>>2>>>0]=k.length;var C=0;return k.forEach(function(z){C+=z.length+1}),u()[E>>2>>>0]=C,0}function Pe(T){return I?J(22,1,T):52}function We(T,E,k,C){return I?J(23,1,T,E,k,C):52}function et(T,E,k,C,z){return I?J(24,1,T,E,k,C,z):70}var Ft=[null,[],[]];function Nt(T,E){var k=Ft[T];E===0||E===10?((T===1?M:j)(Ne(k,0)),k.length=0):k.push(E)}function zt(T,E,k,C){if(I)return J(25,1,T,E,k,C);for(var z=0,V=0;V>2>>>0],ne=u()[E+4>>2>>>0];E+=8;for(var pe=0;pe>>0]);z+=ne}return u()[C>>2>>>0]=z,0}var ze=0;function Dt(T){return T%4==0&&(T%100!=0||T%400==0)}var Bt=[31,29,31,30,31,30,31,31,30,31,30,31],Ut=[31,28,31,30,31,30,31,31,30,31,30,31];function Vt(T,E,k,C){function z(q,_e,De){for(q=typeof q=="number"?q.toString():q||"";q.length<_e;)q=De[0]+q;return q}function V(q,_e){return z(q,_e,"0")}function K(q,_e){function De(ht){return 0>ht?-1:0tt-q.getDate())){q.setDate(q.getDate()+_e);break}_e-=tt-q.getDate()+1,q.setDate(1),11>De?q.setMonth(De+1):(q.setMonth(0),q.setFullYear(q.getFullYear()+1))}return De=new Date(q.getFullYear()+1,0,4),_e=ne(new Date(q.getFullYear(),0,4)),De=ne(De),0>=K(_e,q)?0>=K(De,q)?q.getFullYear()+1:q.getFullYear():q.getFullYear()-1}var me=p()[C+40>>2>>>0];for(var Me in C={Lc:p()[C>>2>>>0],Kc:p()[C+4>>2>>>0],dc:p()[C+8>>2>>>0],jc:p()[C+12>>2>>>0],ec:p()[C+16>>2>>>0],Xb:p()[C+20>>2>>>0],Tb:p()[C+24>>2>>>0],Wb:p()[C+28>>2>>>0],Rc:p()[C+32>>2>>>0],Jc:p()[C+36>>2>>>0],Mc:me?Te(me):""},k=Te(k),me={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})k=k.replace(new RegExp(Me,"g"),me[Me]);var Ke="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),He="January February March April May June July August September October November December".split(" ");for(Me in me={"%a":function(q){return Ke[q.Tb].substring(0,3)},"%A":function(q){return Ke[q.Tb]},"%b":function(q){return He[q.ec].substring(0,3)},"%B":function(q){return He[q.ec]},"%C":function(q){return V((q.Xb+1900)/100|0,2)},"%d":function(q){return V(q.jc,2)},"%e":function(q){return z(q.jc,2," ")},"%g":function(q){return pe(q).toString().substring(2)},"%G":function(q){return pe(q)},"%H":function(q){return V(q.dc,2)},"%I":function(q){return(q=q.dc)==0?q=12:12q.dc?"AM":"PM"},"%S":function(q){return V(q.Lc,2)},"%t":function(){return" "},"%u":function(q){return q.Tb||7},"%U":function(q){return V(Math.floor((q.Wb+7-q.Tb)/7),2)},"%V":function(q){var _e=Math.floor((q.Wb+7-(q.Tb+6)%7)/7);if(2>=(q.Tb+371-q.Wb-2)%7&&_e++,_e)_e==53&&((De=(q.Tb+371-q.Wb)%7)==4||De==3&&Dt(q.Xb)||(_e=1));else{_e=52;var De=(q.Tb+7-q.Wb-1)%7;(De==4||De==5&&Dt(q.Xb%400-1))&&_e++}return V(_e,2)},"%w":function(q){return q.Tb},"%W":function(q){return 
V(Math.floor((q.Wb+7-(q.Tb+6)%7)/7),2)},"%y":function(q){return(q.Xb+1900).toString().substring(2)},"%Y":function(q){return q.Xb+1900},"%z":function(q){var _e=0<=(q=q.Jc);return q=Math.abs(q)/60,(_e?"+":"-")+("0000"+(q/60*100+q%60)).slice(-4)},"%Z":function(q){return q.Mc},"%%":function(){return"%"}},k=k.replace(/%%/g,"\0\0"),me)k.includes(Me)&&(k=k.replace(new RegExp(Me,"g"),me[Me](C)));return Me=function(q){var _e=Array(Ce(q)+1);return $e(q,_e,0,_e.length),_e}(k=k.replace(/\0\0/g,"%")),Me.length>E?0:(function(q,_e){a().set(q,_e>>>0)}(Me,T),Me.length-1)}re.fc();var hn=[null,Re,kt,at,bt,yt,_t,wt,vt,xt,Tt,St,At,Ot,Et,P,D,U,W,te,G,be,Pe,We,et,zt],fn={b:function(T){return Lt(T+24)+24},n:function(T){return(T=new se(T)).uc()||(T.hc(!0),ae--),T.ic(!1),st.push(T),T.sc(),T.vc()},ma:function(T){throw j("Unexpected exception thrown, this is not properly supported - aborting"),ye=!0,T},x:function(){fe(0);var T=st.pop();if(T.Hc()&&!T.kc()){var E=T.Dc();E&&ve(E)(T.Zb),gt(T.Zb)}ie=0},e:function(){var T=ie;if(!T)return ze=0;var E=new se(T);E.cc(T);var k=E.bc();if(!k)return ze=0,T;for(var C=Array.prototype.slice.call(arguments),z=0;zF(C));else if(I)postMessage({targetThread:T,cmd:"processProxyingQueue",queue:C});else{if(!(T=re.Vb[T]))return;T.postMessage({cmd:"processProxyingQueue",queue:C})}return 1},Ea:function(){return-1},Pa:function(T,E){T=new Date(1e3*R(T)),p()[E>>2>>>0]=T.getUTCSeconds(),p()[E+4>>2>>>0]=T.getUTCMinutes(),p()[E+8>>2>>>0]=T.getUTCHours(),p()[E+12>>2>>>0]=T.getUTCDate(),p()[E+16>>2>>>0]=T.getUTCMonth(),p()[E+20>>2>>>0]=T.getUTCFullYear()-1900,p()[E+24>>2>>>0]=T.getUTCDay(),T=(T.getTime()-Date.UTC(T.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,p()[E+28>>2>>>0]=T},Qa:function(T,E){T=new Date(1e3*R(T)),p()[E>>2>>>0]=T.getSeconds(),p()[E+4>>2>>>0]=T.getMinutes(),p()[E+8>>2>>>0]=T.getHours(),p()[E+12>>2>>>0]=T.getDate(),p()[E+16>>2>>>0]=T.getMonth(),p()[E+20>>2>>>0]=T.getFullYear()-1900,p()[E+24>>2>>>0]=T.getDay();var k=new Date(T.getFullYear(),0,1),C=(T.getTime()-k.getTime())/864e5|0;p()[E+28>>2>>>0]=C,p()[E+36>>2>>>0]=-60*T.getTimezoneOffset(),C=new Date(T.getFullYear(),6,1).getTimezoneOffset(),T=0|(C!=(k=k.getTimezoneOffset())&&T.getTimezoneOffset()==Math.min(k,C)),p()[E+32>>2>>>0]=T},Ra:function(T){var E=new Date(p()[T+20>>2>>>0]+1900,p()[T+16>>2>>>0],p()[T+12>>2>>>0],p()[T+8>>2>>>0],p()[T+4>>2>>>0],p()[T>>2>>>0],0),k=p()[T+32>>2>>>0],C=E.getTimezoneOffset(),z=new Date(E.getFullYear(),0,1),V=new Date(E.getFullYear(),6,1).getTimezoneOffset(),K=z.getTimezoneOffset(),ne=Math.min(K,V);return 0>k?p()[T+32>>2>>>0]=+(V!=K&&ne==C):0>2>>>0]=E.getDay(),k=(E.getTime()-z.getTime())/864e5|0,p()[T+28>>2>>>0]=k,p()[T>>2>>>0]=E.getSeconds(),p()[T+4>>2>>>0]=E.getMinutes(),p()[T+8>>2>>>0]=E.getHours(),p()[T+12>>2>>>0]=E.getDate(),p()[T+16>>2>>>0]=E.getMonth(),E.getTime()/1e3|0},Aa:U,Ba:W,Sa:function T(E,k,C){T.Ac||(T.Ac=!0,te(E,k,C))},y:function(){ge("")},U:function(){if(!x&&!O){var T="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";ot||(ot={}),ot[T]||(ot[T]=1,x&&(T="warning: "+T),j(T))}},ra:function(){return 4294901760},B:pt,Ia:function(T,E,k){h().copyWithin(T>>>0,E>>>0,E+k>>>0)},F:function(){return x?o(3993).cpus().length:navigator.hardwareConcurrency},Da:function(T,E,k){Se.length=E,k>>=3;for(var C=0;C>>0];return(0>T?Pt[-T-1]:hn[T]).apply(null,Se)},qa:function(T){var E=h().length;if((T>>>=0)<=E||4294901760=k;k*=2){var C=E*(1+.2/k);C=Math.min(C,T+100663296);var 
z=Math;C=Math.max(T,C),z=z.min.call(z,4294901760,C+(65536-C%65536)%65536);e:{try{X.grow(z-ee.byteLength+65535>>>16),Ee(X.buffer);var V=1;break e}catch{}V=void 0}if(V)return!0}return!1},Na:function(){throw"unwind"},Ga:G,Ha:be,J:it,I:Pe,S:We,ga:et,R:zt,d:function(){return ze},na:function T(E,k){T.lc||(T.lc=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var z=new Uint8Array(1);return()=>(crypto.getRandomValues(z),z[0])}if(x)try{var V=o(Object(function(){var K=new Error("Cannot find module 'crypto'");throw K.code="MODULE_NOT_FOUND",K}()));return()=>V.randomBytes(1)[0]}catch{}return()=>ge("randomDevice")}());for(var C=0;C>0>>>0]=T.lc();return 0},ia:function(T,E,k){var C=he();try{return ve(T)(E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},ja:function(T,E,k){var C=he();try{return ve(T)(E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},K:function(T){var E=he();try{return ve(T)()}catch(k){if(de(E),k!==k+0)throw k;fe(1,0)}},f:function(T,E){var k=he();try{return ve(T)(E)}catch(C){if(de(k),C!==C+0)throw C;fe(1,0)}},P:function(T,E,k){var C=he();try{return ve(T)(E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},Q:function(T,E,k){var C=he();try{return ve(T)(E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},k:function(T,E,k){var C=he();try{return ve(T)(E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},p:function(T,E,k,C){var z=he();try{return ve(T)(E,k,C)}catch(V){if(de(z),V!==V+0)throw V;fe(1,0)}},q:function(T,E,k,C,z){var V=he();try{return ve(T)(E,k,C,z)}catch(K){if(de(V),K!==K+0)throw K;fe(1,0)}},N:function(T,E,k,C,z,V){var K=he();try{return ve(T)(E,k,C,z,V)}catch(ne){if(de(K),ne!==ne+0)throw ne;fe(1,0)}},s:function(T,E,k,C,z,V){var K=he();try{return ve(T)(E,k,C,z,V)}catch(ne){if(de(K),ne!==ne+0)throw ne;fe(1,0)}},w:function(T,E,k,C,z,V,K){var ne=he();try{return ve(T)(E,k,C,z,V,K)}catch(pe){if(de(ne),pe!==pe+0)throw pe;fe(1,0)}},L:function(T,E,k,C,z,V,K,ne){var pe=he();try{return ve(T)(E,k,C,z,V,K,ne)}catch(me){if(de(pe),me!==me+0)throw me;fe(1,0)}},E:function(T,E,k,C,z,V,K,ne,pe,me,Me,Ke){var He=he();try{return ve(T)(E,k,C,z,V,K,ne,pe,me,Me,Ke)}catch(q){if(de(He),q!==q+0)throw q;fe(1,0)}},aa:function(T,E,k,C,z,V,K,ne){var pe=he();try{return un(T,E,k,C,z,V,K,ne)}catch(me){if(de(pe),me!==me+0)throw me;fe(1,0)}},_:function(T,E,k,C,z,V,K){var ne=he();try{return en(T,E,k,C,z,V,K)}catch(pe){if(de(ne),pe!==pe+0)throw pe;fe(1,0)}},Z:function(T,E,k,C,z){var V=he();try{return ln(T,E,k,C,z)}catch(K){if(de(V),K!==K+0)throw K;fe(1,0)}},ca:function(T,E,k,C){var z=he();try{return sn(T,E,k,C)}catch(V){if(de(z),V!==V+0)throw V;fe(1,0)}},$:function(T){var E=he();try{return Qt(T)}catch(k){if(de(E),k!==k+0)throw k;fe(1,0)}},ba:function(T,E){var k=he();try{return an(T,E)}catch(C){if(de(k),C!==C+0)throw C;fe(1,0)}},Y:function(T,E,k){var C=he();try{return tn(T,E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},g:function(T){var E=he();try{ve(T)()}catch(k){if(de(E),k!==k+0)throw k;fe(1,0)}},r:function(T,E){var k=he();try{ve(T)(E)}catch(C){if(de(k),C!==C+0)throw C;fe(1,0)}},i:function(T,E,k){var C=he();try{ve(T)(E,k)}catch(z){if(de(C),z!==z+0)throw z;fe(1,0)}},ha:function(T,E,k,C){var z=he();try{ve(T)(E,k,C)}catch(V){if(de(z),V!==V+0)throw V;fe(1,0)}},m:function(T,E,k,C){var z=he();try{ve(T)(E,k,C)}catch(V){if(de(z),V!==V+0)throw V;fe(1,0)}},v:function(T,E,k,C,z){var V=he();try{ve(T)(E,k,C,z)}catch(K){if(de(V),K!==K+0)throw K;fe(1,0)}},u:function(T,E,k,C,z,V){var K=he();try{ve(T)(E,k,C,z,V)}catch(ne){if(de(K),ne!==ne+0)throw ne;fe(1,0)}},O:function(T,E,k,C,z,V,K){var 
ne=he();try{ve(T)(E,k,C,z,V,K)}catch(pe){if(de(ne),pe!==pe+0)throw pe;fe(1,0)}},A:function(T,E,k,C,z,V,K,ne){var pe=he();try{ve(T)(E,k,C,z,V,K,ne)}catch(me){if(de(pe),me!==me+0)throw me;fe(1,0)}},ka:function(T,E,k,C,z,V,K,ne,pe){var me=he();try{ve(T)(E,k,C,z,V,K,ne,pe)}catch(Me){if(de(me),Me!==Me+0)throw Me;fe(1,0)}},C:function(T,E,k,C,z,V,K,ne,pe,me,Me){var Ke=he();try{ve(T)(E,k,C,z,V,K,ne,pe,me,Me)}catch(He){if(de(Ke),He!==He+0)throw He;fe(1,0)}},D:function(T,E,k,C,z,V,K,ne,pe,me,Me,Ke,He,q,_e,De){var tt=he();try{ve(T)(E,k,C,z,V,K,ne,pe,me,Me,Ke,He,q,_e,De)}catch(ht){if(de(tt),ht!==ht+0)throw ht;fe(1,0)}},fa:function(T,E,k,C,z,V,K,ne){var pe=he();try{nn(T,E,k,C,z,V,K,ne)}catch(me){if(de(pe),me!==me+0)throw me;fe(1,0)}},da:function(T,E,k,C,z,V,K,ne,pe,me,Me,Ke){var He=he();try{on(T,E,k,C,z,V,K,ne,pe,me,Me,Ke)}catch(q){if(de(He),q!==q+0)throw q;fe(1,0)}},ea:function(T,E,k,C,z,V){var K=he();try{rn(T,E,k,C,z,V)}catch(ne){if(de(K),ne!==ne+0)throw ne;fe(1,0)}},o:function(T){return T},a:X||t.wasmMemory,G:function(T){ze=T},la:Vt,z:function(T,E,k,C){return Vt(T,E,k,C)}};(function(){function T(z,V){t.asm=z.exports,re.qc.push(t.asm.sb),Be=t.asm.ub,Ge.unshift(t.asm.Va),Q=V,I||(je--,t.monitorRunDependencies&&t.monitorRunDependencies(je),je==0&&Ye&&(z=Ye,Ye=null,z()))}function E(z){T(z.instance,z.module)}function k(z){return function(){if(!H&&(A||O)){if(typeof fetch=="function"&&!Ie.startsWith("file://"))return fetch(Ie,{credentials:"same-origin"}).then(function(V){if(!V.ok)throw"failed to load wasm binary file at '"+Ie+"'";return V.arrayBuffer()}).catch(function(){return lt()});if(d)return new Promise(function(V,K){d(Ie,function(ne){V(new Uint8Array(ne))},K)})}return Promise.resolve().then(function(){return lt()})}().then(function(V){return WebAssembly.instantiate(V,C)}).then(function(V){return V}).then(z,function(V){j("failed to asynchronously prepare wasm: "+V),ge(V)})}var C={a:fn};if(I||(je++,t.monitorRunDependencies&&t.monitorRunDependencies(je)),t.instantiateWasm)try{return t.instantiateWasm(C,T)}catch(z){return j("Module.instantiateWasm callback failed with error: "+z),!1}(H||typeof WebAssembly.instantiateStreaming!="function"||ft()||Ie.startsWith("file://")||x||typeof fetch!="function"?k(E):fetch(Ie,{credentials:"same-origin"}).then(function(z){return WebAssembly.instantiateStreaming(z,C).then(E,function(V){return j("wasm streaming compile failed: "+V),j("falling back to ArrayBuffer 
instantiation"),k(E)})})).catch(r)})(),t.___wasm_call_ctors=function(){return(t.___wasm_call_ctors=t.asm.Va).apply(null,arguments)},t._OrtInit=function(){return(t._OrtInit=t.asm.Wa).apply(null,arguments)},t._OrtCreateSessionOptions=function(){return(t._OrtCreateSessionOptions=t.asm.Xa).apply(null,arguments)},t._OrtAppendExecutionProvider=function(){return(t._OrtAppendExecutionProvider=t.asm.Ya).apply(null,arguments)},t._OrtAddSessionConfigEntry=function(){return(t._OrtAddSessionConfigEntry=t.asm.Za).apply(null,arguments)},t._OrtReleaseSessionOptions=function(){return(t._OrtReleaseSessionOptions=t.asm._a).apply(null,arguments)},t._OrtCreateSession=function(){return(t._OrtCreateSession=t.asm.$a).apply(null,arguments)},t._OrtReleaseSession=function(){return(t._OrtReleaseSession=t.asm.ab).apply(null,arguments)},t._OrtGetInputCount=function(){return(t._OrtGetInputCount=t.asm.bb).apply(null,arguments)},t._OrtGetOutputCount=function(){return(t._OrtGetOutputCount=t.asm.cb).apply(null,arguments)},t._OrtGetInputName=function(){return(t._OrtGetInputName=t.asm.db).apply(null,arguments)},t._OrtGetOutputName=function(){return(t._OrtGetOutputName=t.asm.eb).apply(null,arguments)},t._OrtFree=function(){return(t._OrtFree=t.asm.fb).apply(null,arguments)},t._OrtCreateTensor=function(){return(t._OrtCreateTensor=t.asm.gb).apply(null,arguments)},t._OrtGetTensorData=function(){return(t._OrtGetTensorData=t.asm.hb).apply(null,arguments)},t._OrtReleaseTensor=function(){return(t._OrtReleaseTensor=t.asm.ib).apply(null,arguments)},t._OrtCreateRunOptions=function(){return(t._OrtCreateRunOptions=t.asm.jb).apply(null,arguments)},t._OrtAddRunConfigEntry=function(){return(t._OrtAddRunConfigEntry=t.asm.kb).apply(null,arguments)},t._OrtReleaseRunOptions=function(){return(t._OrtReleaseRunOptions=t.asm.lb).apply(null,arguments)},t._OrtRun=function(){return(t._OrtRun=t.asm.mb).apply(null,arguments)},t._OrtEndProfiling=function(){return(t._OrtEndProfiling=t.asm.nb).apply(null,arguments)};var Mt=t._pthread_self=function(){return(Mt=t._pthread_self=t.asm.ob).apply(null,arguments)},Lt=t._malloc=function(){return(Lt=t._malloc=t.asm.pb).apply(null,arguments)},Gt=t._free=function(){return(Gt=t._free=t.asm.qb).apply(null,arguments)},qt=t._fflush=function(){return(qt=t._fflush=t.asm.rb).apply(null,arguments)};t.__emscripten_tls_init=function(){return(t.__emscripten_tls_init=t.asm.sb).apply(null,arguments)};var Wt=t.___funcs_on_exit=function(){return(Wt=t.___funcs_on_exit=t.asm.tb).apply(null,arguments)},Ht=t.__emscripten_thread_init=function(){return(Ht=t.__emscripten_thread_init=t.asm.vb).apply(null,arguments)};t.__emscripten_thread_crashed=function(){return(t.__emscripten_thread_crashed=t.asm.wb).apply(null,arguments)};var 
Ct,Xt=t._emscripten_run_in_main_runtime_thread_js=function(){return(Xt=t._emscripten_run_in_main_runtime_thread_js=t.asm.xb).apply(null,arguments)},Yt=t.__emscripten_proxy_execute_task_queue=function(){return(Yt=t.__emscripten_proxy_execute_task_queue=t.asm.yb).apply(null,arguments)},Rt=t.__emscripten_thread_free_data=function(){return(Rt=t.__emscripten_thread_free_data=t.asm.zb).apply(null,arguments)},Kt=t.__emscripten_thread_exit=function(){return(Kt=t.__emscripten_thread_exit=t.asm.Ab).apply(null,arguments)},fe=t._setThrew=function(){return(fe=t._setThrew=t.asm.Bb).apply(null,arguments)},Zt=t._emscripten_stack_set_limits=function(){return(Zt=t._emscripten_stack_set_limits=t.asm.Cb).apply(null,arguments)},he=t.stackSave=function(){return(he=t.stackSave=t.asm.Db).apply(null,arguments)},de=t.stackRestore=function(){return(de=t.stackRestore=t.asm.Eb).apply(null,arguments)},jt=t.stackAlloc=function(){return(jt=t.stackAlloc=t.asm.Fb).apply(null,arguments)},$t=t.___cxa_can_catch=function(){return($t=t.___cxa_can_catch=t.asm.Gb).apply(null,arguments)},Jt=t.___cxa_is_pointer_type=function(){return(Jt=t.___cxa_is_pointer_type=t.asm.Hb).apply(null,arguments)},Qt=t.dynCall_j=function(){return(Qt=t.dynCall_j=t.asm.Ib).apply(null,arguments)},en=t.dynCall_iiiiij=function(){return(en=t.dynCall_iiiiij=t.asm.Jb).apply(null,arguments)},tn=t.dynCall_jii=function(){return(tn=t.dynCall_jii=t.asm.Kb).apply(null,arguments)},nn=t.dynCall_viiiiij=function(){return(nn=t.dynCall_viiiiij=t.asm.Lb).apply(null,arguments)},rn=t.dynCall_vjji=function(){return(rn=t.dynCall_vjji=t.asm.Mb).apply(null,arguments)},on=t.dynCall_viiijjjii=function(){return(on=t.dynCall_viiijjjii=t.asm.Nb).apply(null,arguments)},sn=t.dynCall_iij=function(){return(sn=t.dynCall_iij=t.asm.Ob).apply(null,arguments)},an=t.dynCall_ji=function(){return(an=t.dynCall_ji=t.asm.Pb).apply(null,arguments)},un=t.dynCall_iiiiiij=function(){return(un=t.dynCall_iiiiiij=t.asm.Qb).apply(null,arguments)},ln=t.dynCall_iiij=function(){return(ln=t.dynCall_iiij=t.asm.Rb).apply(null,arguments)};function cn(){function T(){if(!Ct&&(Ct=!0,t.calledRun=!0,!ye)&&(I||rt(Ge),e(t),t.onRuntimeInitialized&&t.onRuntimeInitialized(),!I)){if(t.postRun)for(typeof t.postRun=="function"&&(t.postRun=[t.postRun]);t.postRun.length;){var E=t.postRun.shift();Ze.unshift(E)}rt(Ze)}}if(!(0{var l,c=(l=(l=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(f){var a,h,p;f=f||{},a||(a=f!==void 0?f:{}),a.ready=new Promise(function(P,D){h=P,p=D});var u,s,t,e,r,i,d=Object.assign({},a),g="./this.program",m=(P,D)=>{throw D},b=typeof window=="object",_=typeof importScripts=="function",v=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",w="";v?(w=_?o(908).dirname(w)+"/":"//",i=()=>{r||(e=o(1384),r=o(908))},u=function(P,D){return i(),P=r.normalize(P),e.readFileSync(P,D?void 0:"utf8")},t=P=>((P=u(P,!0)).buffer||(P=new Uint8Array(P)),P),s=(P,D,F)=>{i(),P=r.normalize(P),e.readFile(P,function(R,U){R?F(R):D(U.buffer)})},1{if(x||0{var D=new XMLHttpRequest;return D.open("GET",P,!1),D.send(null),D.responseText},_&&(t=P=>{var D=new XMLHttpRequest;return D.open("GET",P,!1),D.responseType="arraybuffer",D.send(null),new Uint8Array(D.response)}),s=(P,D,F)=>{var R=new XMLHttpRequest;R.open("GET",P,!0),R.responseType="arraybuffer",R.onload=()=>{R.status==200||R.status==0&&R.response?D(R.response):F()},R.onerror=F,R.send(null)});var 
S,A=a.print||console.log.bind(console),O=a.printErr||console.warn.bind(console);Object.assign(a,d),d=null,a.thisProgram&&(g=a.thisProgram),a.quit&&(m=a.quit),a.wasmBinary&&(S=a.wasmBinary);var x=a.noExitRuntime||!1;typeof WebAssembly!="object"&&Ee("no native wasm support detected");var I,$,B,L,N,H,M=!1,j=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function Z(P,D,F){var R=(D>>>=0)+F;for(F=D;P[F]&&!(F>=R);)++F;if(16(U=(240&U)==224?(15&U)<<12|W<<6|Y:(7&U)<<18|W<<12|Y<<6|63&P[D++])?R+=String.fromCharCode(U):(U-=65536,R+=String.fromCharCode(55296|U>>10,56320|1023&U))}}else R+=String.fromCharCode(U)}return R}function X(P,D){return(P>>>=0)?Z(L,P,D):""}function Q(P,D,F,R){if(!(0>>=0;R=F+R-1;for(var W=0;W=Y&&(Y=65536+((1023&Y)<<10)|1023&P.charCodeAt(++W)),127>=Y){if(F>=R)break;D[F++>>>0]=Y}else{if(2047>=Y){if(F+1>=R)break;D[F++>>>0]=192|Y>>6}else{if(65535>=Y){if(F+2>=R)break;D[F++>>>0]=224|Y>>12}else{if(F+3>=R)break;D[F++>>>0]=240|Y>>18,D[F++>>>0]=128|Y>>12&63}D[F++>>>0]=128|Y>>6&63}D[F++>>>0]=128|63&Y}}return D[F>>>0]=0,F-U}function ee(P){for(var D=0,F=0;F=R?D++:2047>=R?D+=2:55296<=R&&57343>=R?(D+=4,++F):D+=3}return D}function ue(){var P=I.buffer;$=P,a.HEAP8=B=new Int8Array(P),a.HEAP16=new Int16Array(P),a.HEAP32=N=new Int32Array(P),a.HEAPU8=L=new Uint8Array(P),a.HEAPU16=new Uint16Array(P),a.HEAPU32=H=new Uint32Array(P),a.HEAPF32=new Float32Array(P),a.HEAPF64=new Float64Array(P)}var Ae,xe=[],oe=[],we=[],ye=[],ke=0;function Ne(){var P=a.preRun.shift();xe.unshift(P)}var Te,$e=0,Ce=null;function Ee(P){throw a.onAbort&&a.onAbort(P),O(P="Aborted("+P+")"),M=!0,P=new WebAssembly.RuntimeError(P+". Build with -sASSERTIONS for more info."),p(P),P}function Oe(){return Te.startsWith("data:application/octet-stream;base64,")}if(Te="ort-wasm.wasm",!Oe()){var Be=Te;Te=a.locateFile?a.locateFile(Be,w):w+Be}function Ve(){var P=Te;try{if(P==Te&&S)return new Uint8Array(S);if(t)return t(P);throw"both async and sync fetching of the wasm failed"}catch(D){Ee(D)}}function Ge(P){this.name="ExitStatus",this.message="Program terminated with exit("+P+")",this.status=P}function Xe(P){for(;0>2>>>0]=D},this.Eb=function(){return H[this.zb+4>>2>>>0]},this.Sb=function(D){H[this.zb+8>>2>>>0]=D},this.Wb=function(){return H[this.zb+8>>2>>>0]},this.Tb=function(){N[this.zb>>2>>>0]=0},this.Ib=function(D){B[this.zb+12>>0>>>0]=D?1:0},this.Pb=function(){return B[this.zb+12>>0>>>0]!=0},this.Jb=function(D){B[this.zb+13>>0>>>0]=D?1:0},this.Lb=function(){return B[this.zb+13>>0>>>0]!=0},this.Rb=function(D,F){this.Fb(0),this.Ub(D),this.Sb(F),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){N[this.zb>>2>>>0]+=1},this.Xb=function(){var D=N[this.zb>>2>>>0];return N[this.zb>>2>>>0]=D-1,D===1},this.Fb=function(D){H[this.zb+16>>2>>>0]=D},this.Ob=function(){return H[this.zb+16>>2>>>0]},this.Qb=function(){if(mt(this.Eb()))return H[this.Db>>2>>>0];var D=this.Ob();return D!==0?D:this.Db}}function je(P){return ot(new Ie(P).zb)}var Ye=[];function ge(P){var D=Ye[P];return D||(P>=Ye.length&&(Ye.length=P+1),Ye[P]=D=Ae.get(P)),D}function ft(P){var D=ee(P)+1,F=ve(D);return F&&Q(P,B,F,D),F}var lt={};function Pt(){if(!Je){var P,D={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:g||"./this.program"};for(P in lt)lt[P]===void 0?delete D[P]:D[P]=lt[P];var F=[];for(P in D)F.push(P+"="+D[P]);Je=F}return Je}var Je,ct=[null,[],[]];function dt(P,D){var F=ct[P];D===0||D===10?((P===1?A:O)(Z(F,0)),F.length=0):F.push(D)}var 
Re=0;function it(P){return P%4==0&&(P%100!=0||P%400==0)}var re=[31,29,31,30,31,30,31,31,30,31,30,31],rt=[31,28,31,30,31,30,31,31,30,31,30,31];function It(P,D,F,R){function U(G,be,Pe){for(G=typeof G=="number"?G.toString():G||"";G.lengthet?-1:0We-G.getDate())){G.setDate(G.getDate()+be);break}be-=We-G.getDate()+1,G.setDate(1),11>Pe?G.setMonth(Pe+1):(G.setMonth(0),G.setFullYear(G.getFullYear()+1))}return Pe=new Date(G.getFullYear()+1,0,4),be=te(new Date(G.getFullYear(),0,4)),Pe=te(Pe),0>=Y(be,G)?0>=Y(Pe,G)?G.getFullYear()+1:G.getFullYear():G.getFullYear()-1}var ce=N[R+40>>2>>>0];for(var Se in R={$b:N[R>>2>>>0],Zb:N[R+4>>2>>>0],Gb:N[R+8>>2>>>0],Kb:N[R+12>>2>>>0],Hb:N[R+16>>2>>>0],Cb:N[R+20>>2>>>0],Ab:N[R+24>>2>>>0],Bb:N[R+28>>2>>>0],bc:N[R+32>>2>>>0],Yb:N[R+36>>2>>>0],ac:ce?X(ce):""},F=X(F),ce={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})F=F.replace(new RegExp(Se,"g"),ce[Se]);var Le="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),Fe="January February March April May June July August September October November December".split(" ");for(Se in ce={"%a":function(G){return Le[G.Ab].substring(0,3)},"%A":function(G){return Le[G.Ab]},"%b":function(G){return Fe[G.Hb].substring(0,3)},"%B":function(G){return Fe[G.Hb]},"%C":function(G){return W((G.Cb+1900)/100|0,2)},"%d":function(G){return W(G.Kb,2)},"%e":function(G){return U(G.Kb,2," ")},"%g":function(G){return J(G).toString().substring(2)},"%G":function(G){return J(G)},"%H":function(G){return W(G.Gb,2)},"%I":function(G){return(G=G.Gb)==0?G=12:12G.Gb?"AM":"PM"},"%S":function(G){return W(G.$b,2)},"%t":function(){return" "},"%u":function(G){return G.Ab||7},"%U":function(G){return W(Math.floor((G.Bb+7-G.Ab)/7),2)},"%V":function(G){var be=Math.floor((G.Bb+7-(G.Ab+6)%7)/7);if(2>=(G.Ab+371-G.Bb-2)%7&&be++,be)be==53&&((Pe=(G.Ab+371-G.Bb)%7)==4||Pe==3&&it(G.Cb)||(be=1));else{be=52;var Pe=(G.Ab+7-G.Bb-1)%7;(Pe==4||Pe==5&&it(G.Cb%400-1))&&be++}return W(be,2)},"%w":function(G){return G.Ab},"%W":function(G){return W(Math.floor((G.Bb+7-(G.Ab+6)%7)/7),2)},"%y":function(G){return(G.Cb+1900).toString().substring(2)},"%Y":function(G){return G.Cb+1900},"%z":function(G){var be=0<=(G=G.Yb);return G=Math.abs(G)/60,(be?"+":"-")+("0000"+(G/60*100+G%60)).slice(-4)},"%Z":function(G){return G.ac},"%%":function(){return"%"}},F=F.replace(/%%/g,"\0\0"),ce)F.includes(Se)&&(F=F.replace(new RegExp(Se,"g"),ce[Se](R)));return Se=function(G){var be=Array(ee(G)+1);return Q(G,be,0,be.length),be}(F=F.replace(/\0\0/g,"%")),Se.length>D?0:(B.set(Se,P>>>0),Se.length-1)}var kt={a:function(P){return ve(P+24)+24},m:function(P){return(P=new Ie(P)).Pb()||(P.Ib(!0),qe--),P.Jb(!1),Ze.push(P),P.Nb(),P.Qb()},ia:function(P){throw O("Unexpected exception thrown, this is not properly supported - aborting"),M=!0,P},w:function(){ae(0);var P=Ze.pop();if(P.Xb()&&!P.Lb()){var D=P.Wb();D&&ge(D)(P.Db),je(P.Db)}Ue=0},d:function(){var P=Ue;if(!P)return Re=0;var D=new Ie(P);D.Fb(P);var F=D.Eb();if(!F)return Re=0,P;for(var 
R=Array.prototype.slice.call(arguments),U=0;U>>2]+4294967296*N[P+4>>>2])),N[D>>2>>>0]=P.getUTCSeconds(),N[D+4>>2>>>0]=P.getUTCMinutes(),N[D+8>>2>>>0]=P.getUTCHours(),N[D+12>>2>>>0]=P.getUTCDate(),N[D+16>>2>>>0]=P.getUTCMonth(),N[D+20>>2>>>0]=P.getUTCFullYear()-1900,N[D+24>>2>>>0]=P.getUTCDay(),N[D+28>>2>>>0]=(P.getTime()-Date.UTC(P.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(P,D){P=new Date(1e3*(H[P>>>2]+4294967296*N[P+4>>>2])),N[D>>2>>>0]=P.getSeconds(),N[D+4>>2>>>0]=P.getMinutes(),N[D+8>>2>>>0]=P.getHours(),N[D+12>>2>>>0]=P.getDate(),N[D+16>>2>>>0]=P.getMonth(),N[D+20>>2>>>0]=P.getFullYear()-1900,N[D+24>>2>>>0]=P.getDay();var F=new Date(P.getFullYear(),0,1);N[D+28>>2>>>0]=(P.getTime()-F.getTime())/864e5|0,N[D+36>>2>>>0]=-60*P.getTimezoneOffset();var R=new Date(P.getFullYear(),6,1).getTimezoneOffset();F=F.getTimezoneOffset(),N[D+32>>2>>>0]=0|(R!=F&&P.getTimezoneOffset()==Math.min(F,R))},Fa:function(P){var D=new Date(N[P+20>>2>>>0]+1900,N[P+16>>2>>>0],N[P+12>>2>>>0],N[P+8>>2>>>0],N[P+4>>2>>>0],N[P>>2>>>0],0),F=N[P+32>>2>>>0],R=D.getTimezoneOffset(),U=new Date(D.getFullYear(),0,1),W=new Date(D.getFullYear(),6,1).getTimezoneOffset(),Y=U.getTimezoneOffset(),te=Math.min(Y,W);return 0>F?N[P+32>>2>>>0]=+(W!=Y&&te==R):0>2>>>0]=D.getDay(),N[P+28>>2>>>0]=(D.getTime()-U.getTime())/864e5|0,N[P>>2>>>0]=D.getSeconds(),N[P+4>>2>>>0]=D.getMinutes(),N[P+8>>2>>>0]=D.getHours(),N[P+12>>2>>>0]=D.getDate(),N[P+16>>2>>>0]=D.getMonth(),D.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function P(D,F,R){P.Vb||(P.Vb=!0,function(U,W,Y){function te(Fe){return(Fe=Fe.toTimeString().match(/\(([A-Za-z ]+)\)$/))?Fe[1]:"GMT"}var J=new Date().getFullYear(),ce=new Date(J,0,1),Se=new Date(J,6,1);J=ce.getTimezoneOffset();var Le=Se.getTimezoneOffset();N[U>>2>>>0]=60*Math.max(J,Le),N[W>>2>>>0]=+(J!=Le),U=te(ce),W=te(Se),U=ft(U),W=ft(W),Le>2>>>0]=U,H[Y+4>>2>>>0]=W):(H[Y>>2>>>0]=W,H[Y+4>>2>>>0]=U)}(D,F,R))},B:function(){Ee("")},ma:function(){return 4294901760},I:v?()=>{var P=process.hrtime();return 1e3*P[0]+P[1]/1e6}:()=>performance.now(),xa:function(P,D,F){L.copyWithin(P>>>0,D>>>0,D+F>>>0)},G:function(P){var D=L.length;if(4294901760<(P>>>=0))return!1;for(var F=1;4>=F;F*=2){var R=D*(1+.2/F);R=Math.min(R,P+100663296);var U=Math;R=Math.max(P,R),U=U.min.call(U,4294901760,R+(65536-R%65536)%65536);e:{try{I.grow(U-$.byteLength+65535>>>16),ue();var W=1;break e}catch{}W=void 0}if(W)return!0}return!1},va:function(P,D){var F=0;return Pt().forEach(function(R,U){var W=D+F;for(U=H[P+4*U>>2>>>0]=W,W=0;W>0>>>0]=R.charCodeAt(W);B[U>>0>>>0]=0,F+=R.length+1}),0},wa:function(P,D){var F=Pt();H[P>>2>>>0]=F.length;var R=0;return F.forEach(function(U){R+=U.length+1}),H[D>>2>>>0]=R,0},ba:function(P){x||0>2>>>0],te=H[D+4>>2>>>0];D+=8;for(var J=0;J>>0]);U+=te}return H[R>>2>>>0]=U,0},c:function(){return Re},ja:function P(D,F){P.Mb||(P.Mb=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var U=new Uint8Array(1);return()=>(crypto.getRandomValues(U),U[0])}if(v)try{var W=o(Object(function(){var Y=new Error("Cannot find module 'crypto'");throw Y.code="MODULE_NOT_FOUND",Y}()));return()=>W.randomBytes(1)[0]}catch{}return()=>Ee("randomDevice")}());for(var R=0;R>0>>>0]=P.Mb();return 0},ea:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},fa:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},J:function(P){var D=ie();try{return ge(P)()}catch(F){if(se(D),F!==F+0)throw F;ae(1,0)}},e:function(P,D){var F=ie();try{return 
ge(P)(D)}catch(R){if(se(F),R!==R+0)throw R;ae(1,0)}},N:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},O:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},j:function(P,D,F){var R=ie();try{return ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},o:function(P,D,F,R){var U=ie();try{return ge(P)(D,F,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},p:function(P,D,F,R,U){var W=ie();try{return ge(P)(D,F,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},M:function(P,D,F,R,U,W){var Y=ie();try{return ge(P)(D,F,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},r:function(P,D,F,R,U,W){var Y=ie();try{return ge(P)(D,F,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},v:function(P,D,F,R,U,W,Y){var te=ie();try{return ge(P)(D,F,R,U,W,Y)}catch(J){if(se(te),J!==J+0)throw J;ae(1,0)}},K:function(P,D,F,R,U,W,Y,te){var J=ie();try{return ge(P)(D,F,R,U,W,Y,te)}catch(ce){if(se(J),ce!==ce+0)throw ce;ae(1,0)}},D:function(P,D,F,R,U,W,Y,te,J,ce,Se,Le){var Fe=ie();try{return ge(P)(D,F,R,U,W,Y,te,J,ce,Se,Le)}catch(G){if(se(Fe),G!==G+0)throw G;ae(1,0)}},X:function(P,D,F,R,U,W,Y,te){var J=ie();try{return At(P,D,F,R,U,W,Y,te)}catch(ce){if(se(J),ce!==ce+0)throw ce;ae(1,0)}},V:function(P,D,F,R,U,W,Y){var te=ie();try{return yt(P,D,F,R,U,W,Y)}catch(J){if(se(te),J!==J+0)throw J;ae(1,0)}},U:function(P,D,F,R,U){var W=ie();try{return Ot(P,D,F,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},Z:function(P,D,F,R){var U=ie();try{return Tt(P,D,F,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},W:function(P){var D=ie();try{return bt(P)}catch(F){if(se(D),F!==F+0)throw F;ae(1,0)}},Y:function(P,D){var F=ie();try{return St(P,D)}catch(R){if(se(F),R!==R+0)throw R;ae(1,0)}},T:function(P,D,F){var R=ie();try{return _t(P,D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},f:function(P){var D=ie();try{ge(P)()}catch(F){if(se(D),F!==F+0)throw F;ae(1,0)}},q:function(P,D){var F=ie();try{ge(P)(D)}catch(R){if(se(F),R!==R+0)throw R;ae(1,0)}},h:function(P,D,F){var R=ie();try{ge(P)(D,F)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},da:function(P,D,F,R){var U=ie();try{ge(P)(D,F,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},l:function(P,D,F,R){var U=ie();try{ge(P)(D,F,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},t:function(P,D,F,R,U){var W=ie();try{ge(P)(D,F,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},u:function(P,D,F,R,U,W){var Y=ie();try{ge(P)(D,F,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},x:function(P,D,F,R,U,W,Y){var te=ie();try{ge(P)(D,F,R,U,W,Y)}catch(J){if(se(te),J!==J+0)throw J;ae(1,0)}},z:function(P,D,F,R,U,W,Y,te){var J=ie();try{ge(P)(D,F,R,U,W,Y,te)}catch(ce){if(se(J),ce!==ce+0)throw ce;ae(1,0)}},ga:function(P,D,F,R,U,W,Y,te,J){var ce=ie();try{ge(P)(D,F,R,U,W,Y,te,J)}catch(Se){if(se(ce),Se!==Se+0)throw Se;ae(1,0)}},A:function(P,D,F,R,U,W,Y,te,J,ce,Se){var Le=ie();try{ge(P)(D,F,R,U,W,Y,te,J,ce,Se)}catch(Fe){if(se(Le),Fe!==Fe+0)throw Fe;ae(1,0)}},C:function(P,D,F,R,U,W,Y,te,J,ce,Se,Le,Fe,G,be,Pe){var We=ie();try{ge(P)(D,F,R,U,W,Y,te,J,ce,Se,Le,Fe,G,be,Pe)}catch(et){if(se(We),et!==et+0)throw et;ae(1,0)}},aa:function(P,D,F,R,U,W,Y,te){var J=ie();try{wt(P,D,F,R,U,W,Y,te)}catch(ce){if(se(J),ce!==ce+0)throw ce;ae(1,0)}},_:function(P,D,F,R,U,W,Y,te,J,ce,Se,Le){var Fe=ie();try{xt(P,D,F,R,U,W,Y,te,J,ce,Se,Le)}catch(G){if(se(Fe),G!==G+0)throw G;ae(1,0)}},$:function(P,D,F,R,U,W){var Y=ie();try{vt(P,D,F,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},n:function(P){return P},F:function(P){Re=P},ha:It,y:function(P,D,F,R){return 
It(P,D,F,R)}};(function(){function P(U){a.asm=U.exports,I=a.asm.Ka,ue(),Ae=a.asm.ib,oe.unshift(a.asm.La),$e--,a.monitorRunDependencies&&a.monitorRunDependencies($e),$e==0&&Ce&&(U=Ce,Ce=null,U())}function D(U){P(U.instance)}function F(U){return function(){if(!S&&(b||_)){if(typeof fetch=="function"&&!Te.startsWith("file://"))return fetch(Te,{credentials:"same-origin"}).then(function(W){if(!W.ok)throw"failed to load wasm binary file at '"+Te+"'";return W.arrayBuffer()}).catch(function(){return Ve()});if(s)return new Promise(function(W,Y){s(Te,function(te){W(new Uint8Array(te))},Y)})}return Promise.resolve().then(function(){return Ve()})}().then(function(W){return WebAssembly.instantiate(W,R)}).then(function(W){return W}).then(U,function(W){O("failed to asynchronously prepare wasm: "+W),Ee(W)})}var R={a:kt};if($e++,a.monitorRunDependencies&&a.monitorRunDependencies($e),a.instantiateWasm)try{return a.instantiateWasm(R,P)}catch(U){return O("Module.instantiateWasm callback failed with error: "+U),!1}(S||typeof WebAssembly.instantiateStreaming!="function"||Oe()||Te.startsWith("file://")||v||typeof fetch!="function"?F(D):fetch(Te,{credentials:"same-origin"}).then(function(U){return WebAssembly.instantiateStreaming(U,R).then(D,function(W){return O("wasm streaming compile failed: "+W),O("falling back to ArrayBuffer instantiation"),F(D)})})).catch(p)})(),a.___wasm_call_ctors=function(){return(a.___wasm_call_ctors=a.asm.La).apply(null,arguments)},a._OrtInit=function(){return(a._OrtInit=a.asm.Ma).apply(null,arguments)},a._OrtCreateSessionOptions=function(){return(a._OrtCreateSessionOptions=a.asm.Na).apply(null,arguments)},a._OrtAppendExecutionProvider=function(){return(a._OrtAppendExecutionProvider=a.asm.Oa).apply(null,arguments)},a._OrtAddSessionConfigEntry=function(){return(a._OrtAddSessionConfigEntry=a.asm.Pa).apply(null,arguments)},a._OrtReleaseSessionOptions=function(){return(a._OrtReleaseSessionOptions=a.asm.Qa).apply(null,arguments)},a._OrtCreateSession=function(){return(a._OrtCreateSession=a.asm.Ra).apply(null,arguments)},a._OrtReleaseSession=function(){return(a._OrtReleaseSession=a.asm.Sa).apply(null,arguments)},a._OrtGetInputCount=function(){return(a._OrtGetInputCount=a.asm.Ta).apply(null,arguments)},a._OrtGetOutputCount=function(){return(a._OrtGetOutputCount=a.asm.Ua).apply(null,arguments)},a._OrtGetInputName=function(){return(a._OrtGetInputName=a.asm.Va).apply(null,arguments)},a._OrtGetOutputName=function(){return(a._OrtGetOutputName=a.asm.Wa).apply(null,arguments)},a._OrtFree=function(){return(a._OrtFree=a.asm.Xa).apply(null,arguments)},a._OrtCreateTensor=function(){return(a._OrtCreateTensor=a.asm.Ya).apply(null,arguments)},a._OrtGetTensorData=function(){return(a._OrtGetTensorData=a.asm.Za).apply(null,arguments)},a._OrtReleaseTensor=function(){return(a._OrtReleaseTensor=a.asm._a).apply(null,arguments)},a._OrtCreateRunOptions=function(){return(a._OrtCreateRunOptions=a.asm.$a).apply(null,arguments)},a._OrtAddRunConfigEntry=function(){return(a._OrtAddRunConfigEntry=a.asm.ab).apply(null,arguments)},a._OrtReleaseRunOptions=function(){return(a._OrtReleaseRunOptions=a.asm.bb).apply(null,arguments)},a._OrtRun=function(){return(a._OrtRun=a.asm.cb).apply(null,arguments)},a._OrtEndProfiling=function(){return(a._OrtEndProfiling=a.asm.db).apply(null,arguments)};var 
Qe,ve=a._malloc=function(){return(ve=a._malloc=a.asm.eb).apply(null,arguments)},ot=a._free=function(){return(ot=a._free=a.asm.fb).apply(null,arguments)},pt=a._fflush=function(){return(pt=a._fflush=a.asm.gb).apply(null,arguments)},st=a.___funcs_on_exit=function(){return(st=a.___funcs_on_exit=a.asm.hb).apply(null,arguments)},ae=a._setThrew=function(){return(ae=a._setThrew=a.asm.jb).apply(null,arguments)},ie=a.stackSave=function(){return(ie=a.stackSave=a.asm.kb).apply(null,arguments)},se=a.stackRestore=function(){return(se=a.stackRestore=a.asm.lb).apply(null,arguments)},gt=a.stackAlloc=function(){return(gt=a.stackAlloc=a.asm.mb).apply(null,arguments)},at=a.___cxa_can_catch=function(){return(at=a.___cxa_can_catch=a.asm.nb).apply(null,arguments)},mt=a.___cxa_is_pointer_type=function(){return(mt=a.___cxa_is_pointer_type=a.asm.ob).apply(null,arguments)},bt=a.dynCall_j=function(){return(bt=a.dynCall_j=a.asm.pb).apply(null,arguments)},yt=a.dynCall_iiiiij=function(){return(yt=a.dynCall_iiiiij=a.asm.qb).apply(null,arguments)},_t=a.dynCall_jii=function(){return(_t=a.dynCall_jii=a.asm.rb).apply(null,arguments)},wt=a.dynCall_viiiiij=function(){return(wt=a.dynCall_viiiiij=a.asm.sb).apply(null,arguments)},vt=a.dynCall_vjji=function(){return(vt=a.dynCall_vjji=a.asm.tb).apply(null,arguments)},xt=a.dynCall_viiijjjii=function(){return(xt=a.dynCall_viiijjjii=a.asm.ub).apply(null,arguments)},Tt=a.dynCall_iij=function(){return(Tt=a.dynCall_iij=a.asm.vb).apply(null,arguments)},St=a.dynCall_ji=function(){return(St=a.dynCall_ji=a.asm.wb).apply(null,arguments)},At=a.dynCall_iiiiiij=function(){return(At=a.dynCall_iiiiiij=a.asm.xb).apply(null,arguments)},Ot=a.dynCall_iiij=function(){return(Ot=a.dynCall_iiij=a.asm.yb).apply(null,arguments)};function Et(){function P(){if(!Qe&&(Qe=!0,a.calledRun=!0,!M)){if(Xe(oe),h(a),a.onRuntimeInitialized&&a.onRuntimeInitialized(),a.postRun)for(typeof a.postRun=="function"&&(a.postRun=[a.postRun]);a.postRun.length;){var D=a.postRun.shift();ye.unshift(D)}Xe(ye)}}if(!(0<$e)){if(a.preRun)for(typeof a.preRun=="function"&&(a.preRun=[a.preRun]);a.preRun.length;)Ne();Xe(xe),0<$e||(a.setStatus?(a.setStatus("Running..."),setTimeout(function(){setTimeout(function(){a.setStatus("")},1),P()},1)):P())}}if(a.UTF8ToString=X,a.stringToUTF8=function(P,D,F){return Q(P,L,D,F)},a.lengthBytesUTF8=ee,a.stackSave=ie,a.stackRestore=se,a.stackAlloc=gt,Ce=function P(){Qe||Et(),Qe||(Ce=P)},a.preInit)for(typeof a.preInit=="function"&&(a.preInit=[a.preInit]);0{y.exports=function(n,o){for(var l=new Array(arguments.length-1),c=0,f=2,a=!0;f{var o=n;o.length=function(h){var p=h.length;if(!p)return 0;for(var u=0;--p%4>1&&h.charAt(p)==="=";)++u;return Math.ceil(3*h.length)/4-u};for(var l=new Array(64),c=new Array(123),f=0;f<64;)c[l[f]=f<26?f+65:f<52?f+71:f<62?f-4:f-59|43]=f++;o.encode=function(h,p,u){for(var s,t=null,e=[],r=0,i=0;p>2],s=(3&d)<<4,i=1;break;case 1:e[r++]=l[s|d>>4],s=(15&d)<<2,i=2;break;case 2:e[r++]=l[s|d>>6],e[r++]=l[63&d],i=0}r>8191&&((t||(t=[])).push(String.fromCharCode.apply(String,e)),r=0)}return i&&(e[r++]=l[s],e[r++]=61,i===1&&(e[r++]=61)),t?(r&&t.push(String.fromCharCode.apply(String,e.slice(0,r))),t.join("")):String.fromCharCode.apply(String,e.slice(0,r))};var a="invalid encoding";o.decode=function(h,p,u){for(var s,t=u,e=0,r=0;r1)break;if((i=c[i])===void 0)throw Error(a);switch(e){case 0:s=i,e=1;break;case 1:p[u++]=s<<2|(48&i)>>4,s=i,e=2;break;case 2:p[u++]=(15&s)<<4|(60&i)>>2,s=i,e=3;break;case 3:p[u++]=(3&s)<<6|i,e=0}}if(e===1)throw Error(a);return 
u-t},o.test=function(h){return/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(h)}},9211:y=>{function n(){this._listeners={}}y.exports=n,n.prototype.on=function(o,l,c){return(this._listeners[o]||(this._listeners[o]=[])).push({fn:l,ctx:c||this}),this},n.prototype.off=function(o,l){if(o===void 0)this._listeners={};else if(l===void 0)this._listeners[o]=[];else for(var c=this._listeners[o],f=0;f{function n(a){return typeof Float32Array<"u"?function(){var h=new Float32Array([-0]),p=new Uint8Array(h.buffer),u=p[3]===128;function s(i,d,g){h[0]=i,d[g]=p[0],d[g+1]=p[1],d[g+2]=p[2],d[g+3]=p[3]}function t(i,d,g){h[0]=i,d[g]=p[3],d[g+1]=p[2],d[g+2]=p[1],d[g+3]=p[0]}function e(i,d){return p[0]=i[d],p[1]=i[d+1],p[2]=i[d+2],p[3]=i[d+3],h[0]}function r(i,d){return p[3]=i[d],p[2]=i[d+1],p[1]=i[d+2],p[0]=i[d+3],h[0]}a.writeFloatLE=u?s:t,a.writeFloatBE=u?t:s,a.readFloatLE=u?e:r,a.readFloatBE=u?r:e}():function(){function h(u,s,t,e){var r=s<0?1:0;if(r&&(s=-s),s===0)u(1/s>0?0:2147483648,t,e);else if(isNaN(s))u(2143289344,t,e);else if(s>34028234663852886e22)u((r<<31|2139095040)>>>0,t,e);else if(s<11754943508222875e-54)u((r<<31|Math.round(s/1401298464324817e-60))>>>0,t,e);else{var i=Math.floor(Math.log(s)/Math.LN2);u((r<<31|i+127<<23|8388607&Math.round(s*Math.pow(2,-i)*8388608))>>>0,t,e)}}function p(u,s,t){var e=u(s,t),r=2*(e>>31)+1,i=e>>>23&255,d=8388607&e;return i===255?d?NaN:r*(1/0):i===0?1401298464324817e-60*r*d:r*Math.pow(2,i-150)*(d+8388608)}a.writeFloatLE=h.bind(null,o),a.writeFloatBE=h.bind(null,l),a.readFloatLE=p.bind(null,c),a.readFloatBE=p.bind(null,f)}(),typeof Float64Array<"u"?function(){var h=new Float64Array([-0]),p=new Uint8Array(h.buffer),u=p[7]===128;function s(i,d,g){h[0]=i,d[g]=p[0],d[g+1]=p[1],d[g+2]=p[2],d[g+3]=p[3],d[g+4]=p[4],d[g+5]=p[5],d[g+6]=p[6],d[g+7]=p[7]}function t(i,d,g){h[0]=i,d[g]=p[7],d[g+1]=p[6],d[g+2]=p[5],d[g+3]=p[4],d[g+4]=p[3],d[g+5]=p[2],d[g+6]=p[1],d[g+7]=p[0]}function e(i,d){return p[0]=i[d],p[1]=i[d+1],p[2]=i[d+2],p[3]=i[d+3],p[4]=i[d+4],p[5]=i[d+5],p[6]=i[d+6],p[7]=i[d+7],h[0]}function r(i,d){return p[7]=i[d],p[6]=i[d+1],p[5]=i[d+2],p[4]=i[d+3],p[3]=i[d+4],p[2]=i[d+5],p[1]=i[d+6],p[0]=i[d+7],h[0]}a.writeDoubleLE=u?s:t,a.writeDoubleBE=u?t:s,a.readDoubleLE=u?e:r,a.readDoubleBE=u?r:e}():function(){function h(u,s,t,e,r,i){var d=e<0?1:0;if(d&&(e=-e),e===0)u(0,r,i+s),u(1/e>0?0:2147483648,r,i+t);else if(isNaN(e))u(0,r,i+s),u(2146959360,r,i+t);else if(e>17976931348623157e292)u(0,r,i+s),u((d<<31|2146435072)>>>0,r,i+t);else{var g;if(e<22250738585072014e-324)u((g=e/5e-324)>>>0,r,i+s),u((d<<31|g/4294967296)>>>0,r,i+t);else{var m=Math.floor(Math.log(e)/Math.LN2);m===1024&&(m=1023),u(4503599627370496*(g=e*Math.pow(2,-m))>>>0,r,i+s),u((d<<31|m+1023<<20|1048576*g&1048575)>>>0,r,i+t)}}}function p(u,s,t,e,r){var i=u(e,r+s),d=u(e,r+t),g=2*(d>>31)+1,m=d>>>20&2047,b=4294967296*(1048575&d)+i;return m===2047?b?NaN:g*(1/0):m===0?5e-324*g*b:g*Math.pow(2,m-1075)*(b+4503599627370496)}a.writeDoubleLE=h.bind(null,o,0,4),a.writeDoubleBE=h.bind(null,l,4,0),a.readDoubleLE=p.bind(null,c,0,4),a.readDoubleBE=p.bind(null,f,4,0)}(),a}function o(a,h,p){h[p]=255&a,h[p+1]=a>>>8&255,h[p+2]=a>>>16&255,h[p+3]=a>>>24}function l(a,h,p){h[p]=a>>>24,h[p+1]=a>>>16&255,h[p+2]=a>>>8&255,h[p+3]=255&a}function c(a,h){return(a[h]|a[h+1]<<8|a[h+2]<<16|a[h+3]<<24)>>>0}function f(a,h){return(a[h]<<24|a[h+1]<<16|a[h+2]<<8|a[h+3])>>>0}y.exports=n(n)},7199:module=>{function inquire(moduleName){try{var mod=eval("quire".replace(/^/,"re"))(moduleName);if(mod&&(mod.length||Object.keys(mod).length))return 
mod}catch(y){}return null}module.exports=inquire},6662:y=>{y.exports=function(n,o,l){var c=l||8192,f=c>>>1,a=null,h=c;return function(p){if(p<1||p>f)return n(p);h+p>c&&(a=n(c),h=0);var u=o.call(a,h,h+=p);return 7&h&&(h=1+(7|h)),u}}},4997:(y,n)=>{var o=n;o.length=function(l){for(var c=0,f=0,a=0;a191&&a<224?p[u++]=(31&a)<<6|63&l[c++]:a>239&&a<365?(a=((7&a)<<18|(63&l[c++])<<12|(63&l[c++])<<6|63&l[c++])-65536,p[u++]=55296+(a>>10),p[u++]=56320+(1023&a)):p[u++]=(15&a)<<12|(63&l[c++])<<6|63&l[c++],u>8191&&((h||(h=[])).push(String.fromCharCode.apply(String,p)),u=0);return h?(u&&h.push(String.fromCharCode.apply(String,p.slice(0,u))),h.join("")):String.fromCharCode.apply(String,p.slice(0,u))},o.write=function(l,c,f){for(var a,h,p=f,u=0;u>6|192,c[f++]=63&a|128):(64512&a)==55296&&(64512&(h=l.charCodeAt(u+1)))==56320?(a=65536+((1023&a)<<10)+(1023&h),++u,c[f++]=a>>18|240,c[f++]=a>>12&63|128,c[f++]=a>>6&63|128,c[f++]=63&a|128):(c[f++]=a>>12|224,c[f++]=a>>6&63|128,c[f++]=63&a|128);return f-p}},3442:(y,n)=>{n.__esModule=!0;var o=function(){function l(c){if(!c)throw new TypeError("Invalid argument; `value` has no value.");this.value=l.EMPTY,c&&l.isGuid(c)&&(this.value=c)}return l.isGuid=function(c){var f=c.toString();return c&&(c instanceof l||l.validator.test(f))},l.create=function(){return new l([l.gen(2),l.gen(1),l.gen(1),l.gen(1),l.gen(3)].join("-"))},l.createEmpty=function(){return new l("emptyguid")},l.parse=function(c){return new l(c)},l.raw=function(){return[l.gen(2),l.gen(1),l.gen(1),l.gen(1),l.gen(3)].join("-")},l.gen=function(c){for(var f="",a=0;a{y.exports=o;var n=null;try{n=new WebAssembly.Instance(new WebAssembly.Module(new Uint8Array([0,97,115,109,1,0,0,0,1,13,2,96,0,1,127,96,4,127,127,127,127,1,127,3,7,6,0,1,1,1,1,1,6,6,1,127,1,65,0,11,7,50,6,3,109,117,108,0,1,5,100,105,118,95,115,0,2,5,100,105,118,95,117,0,3,5,114,101,109,95,115,0,4,5,114,101,109,95,117,0,5,8,103,101,116,95,104,105,103,104,0,0,10,191,1,6,4,0,35,0,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,126,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,127,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,128,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,129,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,130,34,4,66,32,135,167,36,0,32,4,167,11])),{}).exports}catch{}function o(x,I,$){this.low=0|x,this.high=0|I,this.unsigned=!!$}function l(x){return(x&&x.__isLong__)===!0}o.prototype.__isLong__,Object.defineProperty(o.prototype,"__isLong__",{value:!0}),o.isLong=l;var c={},f={};function a(x,I){var $,B,L;return I?(L=0<=(x>>>=0)&&x<256)&&(B=f[x])?B:($=p(x,(0|x)<0?-1:0,!0),L&&(f[x]=$),$):(L=-128<=(x|=0)&&x<128)&&(B=c[x])?B:($=p(x,x<0?-1:0,!1),L&&(c[x]=$),$)}function h(x,I){if(isNaN(x))return I?m:g;if(I){if(x<0)return m;if(x>=r)return S}else{if(x<=-i)return A;if(x+1>=i)return w}return x<0?h(-x,I).neg():p(x%e|0,x/e|0,I)}function p(x,I,$){return new o(x,I,$)}o.fromInt=a,o.fromNumber=h,o.fromBits=p;var u=Math.pow;function s(x,I,$){if(x.length===0)throw Error("empty string");if(x==="NaN"||x==="Infinity"||x==="+Infinity"||x==="-Infinity")return g;if(typeof I=="number"?($=I,I=!1):I=!!I,($=$||10)<2||36<$)throw RangeError("radix");var B;if((B=x.indexOf("-"))>0)throw Error("interior hyphen");if(B===0)return s(x.substring(1),I,$).neg();for(var 
L=h(u($,8)),N=g,H=0;H>>0:this.low},O.toNumber=function(){return this.unsigned?(this.high>>>0)*e+(this.low>>>0):this.high*e+(this.low>>>0)},O.toString=function(x){if((x=x||10)<2||36>>0).toString(x);if((N=M).isZero())return j+H;for(;j.length<6;)j="0"+j;H=""+j+H}},O.getHighBits=function(){return this.high},O.getHighBitsUnsigned=function(){return this.high>>>0},O.getLowBits=function(){return this.low},O.getLowBitsUnsigned=function(){return this.low>>>0},O.getNumBitsAbs=function(){if(this.isNegative())return this.eq(A)?64:this.neg().getNumBitsAbs();for(var x=this.high!=0?this.high:this.low,I=31;I>0&&!(x&1<=0},O.isOdd=function(){return(1&this.low)==1},O.isEven=function(){return(1&this.low)==0},O.equals=function(x){return l(x)||(x=t(x)),(this.unsigned===x.unsigned||this.high>>>31!=1||x.high>>>31!=1)&&this.high===x.high&&this.low===x.low},O.eq=O.equals,O.notEquals=function(x){return!this.eq(x)},O.neq=O.notEquals,O.ne=O.notEquals,O.lessThan=function(x){return this.comp(x)<0},O.lt=O.lessThan,O.lessThanOrEqual=function(x){return this.comp(x)<=0},O.lte=O.lessThanOrEqual,O.le=O.lessThanOrEqual,O.greaterThan=function(x){return this.comp(x)>0},O.gt=O.greaterThan,O.greaterThanOrEqual=function(x){return this.comp(x)>=0},O.gte=O.greaterThanOrEqual,O.ge=O.greaterThanOrEqual,O.compare=function(x){if(l(x)||(x=t(x)),this.eq(x))return 0;var I=this.isNegative(),$=x.isNegative();return I&&!$?-1:!I&&$?1:this.unsigned?x.high>>>0>this.high>>>0||x.high===this.high&&x.low>>>0>this.low>>>0?-1:1:this.sub(x).isNegative()?-1:1},O.comp=O.compare,O.negate=function(){return!this.unsigned&&this.eq(A)?A:this.not().add(b)},O.neg=O.negate,O.add=function(x){l(x)||(x=t(x));var I=this.high>>>16,$=65535&this.high,B=this.low>>>16,L=65535&this.low,N=x.high>>>16,H=65535&x.high,M=x.low>>>16,j=0,Z=0,X=0,Q=0;return X+=(Q+=L+(65535&x.low))>>>16,Z+=(X+=B+M)>>>16,j+=(Z+=$+H)>>>16,j+=I+N,p((X&=65535)<<16|(Q&=65535),(j&=65535)<<16|(Z&=65535),this.unsigned)},O.subtract=function(x){return l(x)||(x=t(x)),this.add(x.neg())},O.sub=O.subtract,O.multiply=function(x){if(this.isZero())return g;if(l(x)||(x=t(x)),n)return p(n.mul(this.low,this.high,x.low,x.high),n.get_high(),this.unsigned);if(x.isZero())return g;if(this.eq(A))return x.isOdd()?A:g;if(x.eq(A))return this.isOdd()?A:g;if(this.isNegative())return x.isNegative()?this.neg().mul(x.neg()):this.neg().mul(x).neg();if(x.isNegative())return this.mul(x.neg()).neg();if(this.lt(d)&&x.lt(d))return h(this.toNumber()*x.toNumber(),this.unsigned);var I=this.high>>>16,$=65535&this.high,B=this.low>>>16,L=65535&this.low,N=x.high>>>16,H=65535&x.high,M=x.low>>>16,j=65535&x.low,Z=0,X=0,Q=0,ee=0;return Q+=(ee+=L*j)>>>16,X+=(Q+=B*j)>>>16,Q&=65535,X+=(Q+=L*M)>>>16,Z+=(X+=$*j)>>>16,X&=65535,Z+=(X+=B*M)>>>16,X&=65535,Z+=(X+=L*H)>>>16,Z+=I*j+$*M+B*H+L*N,p((Q&=65535)<<16|(ee&=65535),(Z&=65535)<<16|(X&=65535),this.unsigned)},O.mul=O.multiply,O.divide=function(x){if(l(x)||(x=t(x)),x.isZero())throw Error("division by zero");var I,$,B;if(n)return this.unsigned||this.high!==-2147483648||x.low!==-1||x.high!==-1?p((this.unsigned?n.div_u:n.div_s)(this.low,this.high,x.low,x.high),n.get_high(),this.unsigned):this;if(this.isZero())return this.unsigned?m:g;if(this.unsigned){if(x.unsigned||(x=x.toUnsigned()),x.gt(this))return m;if(x.gt(this.shru(1)))return _;B=m}else{if(this.eq(A))return x.eq(b)||x.eq(v)?A:x.eq(A)?b:(I=this.shr(1).div(x).shl(1)).eq(g)?x.isNegative()?b:v:($=this.sub(x.mul(I)),B=I.add($.div(x)));if(x.eq(A))return this.unsigned?m:g;if(this.isNegative())return 
x.isNegative()?this.neg().div(x.neg()):this.neg().div(x).neg();if(x.isNegative())return this.div(x.neg()).neg();B=g}for($=this;$.gte(x);){I=Math.max(1,Math.floor($.toNumber()/x.toNumber()));for(var L=Math.ceil(Math.log(I)/Math.LN2),N=L<=48?1:u(2,L-48),H=h(I),M=H.mul(x);M.isNegative()||M.gt($);)M=(H=h(I-=N,this.unsigned)).mul(x);H.isZero()&&(H=b),B=B.add(H),$=$.sub(M)}return B},O.div=O.divide,O.modulo=function(x){return l(x)||(x=t(x)),n?p((this.unsigned?n.rem_u:n.rem_s)(this.low,this.high,x.low,x.high),n.get_high(),this.unsigned):this.sub(this.div(x).mul(x))},O.mod=O.modulo,O.rem=O.modulo,O.not=function(){return p(~this.low,~this.high,this.unsigned)},O.and=function(x){return l(x)||(x=t(x)),p(this.low&x.low,this.high&x.high,this.unsigned)},O.or=function(x){return l(x)||(x=t(x)),p(this.low|x.low,this.high|x.high,this.unsigned)},O.xor=function(x){return l(x)||(x=t(x)),p(this.low^x.low,this.high^x.high,this.unsigned)},O.shiftLeft=function(x){return l(x)&&(x=x.toInt()),(x&=63)==0?this:x<32?p(this.low<>>32-x,this.unsigned):p(0,this.low<>>x|this.high<<32-x,this.high>>x,this.unsigned):p(this.high>>x-32,this.high>=0?0:-1,this.unsigned)},O.shr=O.shiftRight,O.shiftRightUnsigned=function(x){if(l(x)&&(x=x.toInt()),(x&=63)==0)return this;var I=this.high;return x<32?p(this.low>>>x|I<<32-x,I>>>x,this.unsigned):p(x===32?I:I>>>x-32,0,this.unsigned)},O.shru=O.shiftRightUnsigned,O.shr_u=O.shiftRightUnsigned,O.toSigned=function(){return this.unsigned?p(this.low,this.high,!1):this},O.toUnsigned=function(){return this.unsigned?this:p(this.low,this.high,!0)},O.toBytes=function(x){return x?this.toBytesLE():this.toBytesBE()},O.toBytesLE=function(){var x=this.high,I=this.low;return[255&I,I>>>8&255,I>>>16&255,I>>>24,255&x,x>>>8&255,x>>>16&255,x>>>24]},O.toBytesBE=function(){var x=this.high,I=this.low;return[x>>>24,x>>>16&255,x>>>8&255,255&x,I>>>24,I>>>16&255,I>>>8&255,255&I]},o.fromBytes=function(x,I,$){return $?o.fromBytesLE(x,I):o.fromBytesBE(x,I)},o.fromBytesLE=function(x,I){return new o(x[0]|x[1]<<8|x[2]<<16|x[3]<<24,x[4]|x[5]<<8|x[6]<<16|x[7]<<24,I)},o.fromBytesBE=function(x,I){return new o(x[4]<<24|x[5]<<16|x[6]<<8|x[7],x[0]<<24|x[1]<<16|x[2]<<8|x[3],I)}},1446:(y,n,o)=>{var l,c,f,a=o(2100),h=a.Reader,p=a.Writer,u=a.util,s=a.roots.default||(a.roots.default={});s.onnx=((f={}).Version=(l={},(c=Object.create(l))[l[0]="_START_VERSION"]=0,c[l[1]="IR_VERSION_2017_10_10"]=1,c[l[2]="IR_VERSION_2017_10_30"]=2,c[l[3]="IR_VERSION_2017_11_3"]=3,c[l[4]="IR_VERSION_2019_1_22"]=4,c[l[5]="IR_VERSION"]=5,c),f.AttributeProto=function(){function t(e){if(this.floats=[],this.ints=[],this.strings=[],this.tensors=[],this.graphs=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.name=e.string();break;case 21:d.refAttrName=e.string();break;case 13:d.docString=e.string();break;case 20:d.type=e.int32();break;case 2:d.f=e.float();break;case 3:d.i=e.int64();break;case 4:d.s=e.bytes();break;case 5:d.t=s.onnx.TensorProto.decode(e,e.uint32());break;case 6:d.g=s.onnx.GraphProto.decode(e,e.uint32());break;case 7:if(d.floats&&d.floats.length||(d.floats=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos>>0,e.i.high>>>0).toNumber())),e.s!=null&&(typeof e.s=="string"?u.base64.decode(e.s,r.s=u.newBuffer(u.base64.length(e.s)),0):e.s.length&&(r.s=e.s)),e.t!=null){if(typeof e.t!="object")throw TypeError(".onnx.AttributeProto.t: object expected");r.t=s.onnx.TensorProto.fromObject(e.t)}if(e.g!=null){if(typeof e.g!="object")throw TypeError(".onnx.AttributeProto.g: object 
expected");r.g=s.onnx.GraphProto.fromObject(e.g)}if(e.floats){if(!Array.isArray(e.floats))throw TypeError(".onnx.AttributeProto.floats: array expected");r.floats=[];for(var i=0;i>>0,e.ints[i].high>>>0).toNumber())}if(e.strings){if(!Array.isArray(e.strings))throw TypeError(".onnx.AttributeProto.strings: array expected");for(r.strings=[],i=0;i>>0,e.i.high>>>0).toNumber():e.i),e.s!=null&&e.hasOwnProperty("s")&&(i.s=r.bytes===String?u.base64.encode(e.s,0,e.s.length):r.bytes===Array?Array.prototype.slice.call(e.s):e.s),e.t!=null&&e.hasOwnProperty("t")&&(i.t=s.onnx.TensorProto.toObject(e.t,r)),e.g!=null&&e.hasOwnProperty("g")&&(i.g=s.onnx.GraphProto.toObject(e.g,r)),e.floats&&e.floats.length){i.floats=[];for(var g=0;g>>0,e.ints[g].high>>>0).toNumber():e.ints[g];if(e.strings&&e.strings.length)for(i.strings=[],g=0;g>>3){case 1:d.name=e.string();break;case 2:d.type=s.onnx.TypeProto.decode(e,e.uint32());break;case 3:d.docString=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.name!=null&&e.hasOwnProperty("name")&&!u.isString(e.name))return"name: string expected";if(e.type!=null&&e.hasOwnProperty("type")){var r=s.onnx.TypeProto.verify(e.type);if(r)return"type."+r}return e.docString!=null&&e.hasOwnProperty("docString")&&!u.isString(e.docString)?"docString: string expected":null},t.fromObject=function(e){if(e instanceof s.onnx.ValueInfoProto)return e;var r=new s.onnx.ValueInfoProto;if(e.name!=null&&(r.name=String(e.name)),e.type!=null){if(typeof e.type!="object")throw TypeError(".onnx.ValueInfoProto.type: object expected");r.type=s.onnx.TypeProto.fromObject(e.type)}return e.docString!=null&&(r.docString=String(e.docString)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.name="",i.type=null,i.docString=""),e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.type!=null&&e.hasOwnProperty("type")&&(i.type=s.onnx.TypeProto.toObject(e.type,r)),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},t}(),f.NodeProto=function(){function t(e){if(this.input=[],this.output=[],this.attribute=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.input&&d.input.length||(d.input=[]),d.input.push(e.string());break;case 2:d.output&&d.output.length||(d.output=[]),d.output.push(e.string());break;case 3:d.name=e.string();break;case 4:d.opType=e.string();break;case 7:d.domain=e.string();break;case 5:d.attribute&&d.attribute.length||(d.attribute=[]),d.attribute.push(s.onnx.AttributeProto.decode(e,e.uint32()));break;case 6:d.docString=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.input!=null&&e.hasOwnProperty("input")){if(!Array.isArray(e.input))return"input: array expected";for(var r=0;r>>3){case 1:d.irVersion=e.int64();break;case 8:d.opsetImport&&d.opsetImport.length||(d.opsetImport=[]),d.opsetImport.push(s.onnx.OperatorSetIdProto.decode(e,e.uint32()));break;case 2:d.producerName=e.string();break;case 3:d.producerVersion=e.string();break;case 4:d.domain=e.string();break;case 5:d.modelVersion=e.int64();break;case 6:d.docString=e.string();break;case 7:d.graph=s.onnx.GraphProto.decode(e,e.uint32());break;case 
14:d.metadataProps&&d.metadataProps.length||(d.metadataProps=[]),d.metadataProps.push(s.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.irVersion!=null&&e.hasOwnProperty("irVersion")&&!(u.isInteger(e.irVersion)||e.irVersion&&u.isInteger(e.irVersion.low)&&u.isInteger(e.irVersion.high)))return"irVersion: integer|Long expected";if(e.opsetImport!=null&&e.hasOwnProperty("opsetImport")){if(!Array.isArray(e.opsetImport))return"opsetImport: array expected";for(var r=0;r>>0,e.irVersion.high>>>0).toNumber())),e.opsetImport){if(!Array.isArray(e.opsetImport))throw TypeError(".onnx.ModelProto.opsetImport: array expected");r.opsetImport=[];for(var i=0;i>>0,e.modelVersion.high>>>0).toNumber())),e.docString!=null&&(r.docString=String(e.docString)),e.graph!=null){if(typeof e.graph!="object")throw TypeError(".onnx.ModelProto.graph: object expected");r.graph=s.onnx.GraphProto.fromObject(e.graph)}if(e.metadataProps){if(!Array.isArray(e.metadataProps))throw TypeError(".onnx.ModelProto.metadataProps: array expected");for(r.metadataProps=[],i=0;i>>0,e.irVersion.high>>>0).toNumber():e.irVersion),e.producerName!=null&&e.hasOwnProperty("producerName")&&(i.producerName=e.producerName),e.producerVersion!=null&&e.hasOwnProperty("producerVersion")&&(i.producerVersion=e.producerVersion),e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.modelVersion!=null&&e.hasOwnProperty("modelVersion")&&(typeof e.modelVersion=="number"?i.modelVersion=r.longs===String?String(e.modelVersion):e.modelVersion:i.modelVersion=r.longs===String?u.Long.prototype.toString.call(e.modelVersion):r.longs===Number?new u.LongBits(e.modelVersion.low>>>0,e.modelVersion.high>>>0).toNumber():e.modelVersion),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.graph!=null&&e.hasOwnProperty("graph")&&(i.graph=s.onnx.GraphProto.toObject(e.graph,r)),e.opsetImport&&e.opsetImport.length){i.opsetImport=[];for(var g=0;g>>3){case 1:d.key=e.string();break;case 2:d.value=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.key!=null&&e.hasOwnProperty("key")&&!u.isString(e.key)?"key: string expected":e.value!=null&&e.hasOwnProperty("value")&&!u.isString(e.value)?"value: string expected":null},t.fromObject=function(e){if(e instanceof s.onnx.StringStringEntryProto)return e;var r=new s.onnx.StringStringEntryProto;return e.key!=null&&(r.key=String(e.key)),e.value!=null&&(r.value=String(e.value)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.key="",i.value=""),e.key!=null&&e.hasOwnProperty("key")&&(i.key=e.key),e.value!=null&&e.hasOwnProperty("value")&&(i.value=e.value),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},t}(),f.TensorAnnotation=function(){function t(e){if(this.quantParameterTensorNames=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.tensorName=e.string();break;case 2:d.quantParameterTensorNames&&d.quantParameterTensorNames.length||(d.quantParameterTensorNames=[]),d.quantParameterTensorNames.push(s.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new 
h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.tensorName!=null&&e.hasOwnProperty("tensorName")&&!u.isString(e.tensorName))return"tensorName: string expected";if(e.quantParameterTensorNames!=null&&e.hasOwnProperty("quantParameterTensorNames")){if(!Array.isArray(e.quantParameterTensorNames))return"quantParameterTensorNames: array expected";for(var r=0;r>>3){case 1:d.node&&d.node.length||(d.node=[]),d.node.push(s.onnx.NodeProto.decode(e,e.uint32()));break;case 2:d.name=e.string();break;case 5:d.initializer&&d.initializer.length||(d.initializer=[]),d.initializer.push(s.onnx.TensorProto.decode(e,e.uint32()));break;case 10:d.docString=e.string();break;case 11:d.input&&d.input.length||(d.input=[]),d.input.push(s.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 12:d.output&&d.output.length||(d.output=[]),d.output.push(s.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 13:d.valueInfo&&d.valueInfo.length||(d.valueInfo=[]),d.valueInfo.push(s.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 14:d.quantizationAnnotation&&d.quantizationAnnotation.length||(d.quantizationAnnotation=[]),d.quantizationAnnotation.push(s.onnx.TensorAnnotation.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.node!=null&&e.hasOwnProperty("node")){if(!Array.isArray(e.node))return"node: array expected";for(var r=0;r>>3){case 1:if(d.dims&&d.dims.length||(d.dims=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos>>0,e.dims[i].high>>>0).toNumber())}if(e.dataType!=null&&(r.dataType=0|e.dataType),e.segment!=null){if(typeof e.segment!="object")throw TypeError(".onnx.TensorProto.segment: object expected");r.segment=s.onnx.TensorProto.Segment.fromObject(e.segment)}if(e.floatData){if(!Array.isArray(e.floatData))throw TypeError(".onnx.TensorProto.floatData: array expected");for(r.floatData=[],i=0;i>>0,e.int64Data[i].high>>>0).toNumber())}if(e.name!=null&&(r.name=String(e.name)),e.docString!=null&&(r.docString=String(e.docString)),e.rawData!=null&&(typeof e.rawData=="string"?u.base64.decode(e.rawData,r.rawData=u.newBuffer(u.base64.length(e.rawData)),0):e.rawData.length&&(r.rawData=e.rawData)),e.externalData){if(!Array.isArray(e.externalData))throw TypeError(".onnx.TensorProto.externalData: array expected");for(r.externalData=[],i=0;i>>0,e.uint64Data[i].high>>>0).toNumber(!0))}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.dims=[],i.floatData=[],i.int32Data=[],i.stringData=[],i.int64Data=[],i.doubleData=[],i.uint64Data=[],i.externalData=[]),r.defaults&&(i.dataType=0,i.segment=null,i.name="",r.bytes===String?i.rawData="":(i.rawData=[],r.bytes!==Array&&(i.rawData=u.newBuffer(i.rawData))),i.docString="",i.dataLocation=r.enums===String?"DEFAULT":0),e.dims&&e.dims.length){i.dims=[];for(var 
d=0;d>>0,e.dims[d].high>>>0).toNumber():e.dims[d]}if(e.dataType!=null&&e.hasOwnProperty("dataType")&&(i.dataType=e.dataType),e.segment!=null&&e.hasOwnProperty("segment")&&(i.segment=s.onnx.TensorProto.Segment.toObject(e.segment,r)),e.floatData&&e.floatData.length)for(i.floatData=[],d=0;d>>0,e.int64Data[d].high>>>0).toNumber():e.int64Data[d];if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.rawData!=null&&e.hasOwnProperty("rawData")&&(i.rawData=r.bytes===String?u.base64.encode(e.rawData,0,e.rawData.length):r.bytes===Array?Array.prototype.slice.call(e.rawData):e.rawData),e.doubleData&&e.doubleData.length)for(i.doubleData=[],d=0;d>>0,e.uint64Data[d].high>>>0).toNumber(!0):e.uint64Data[d];if(e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.externalData&&e.externalData.length)for(i.externalData=[],d=0;d>>3){case 1:g.begin=r.int64();break;case 2:g.end=r.int64();break;default:r.skipType(7&m)}}return g},e.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},e.verify=function(r){return typeof r!="object"||r===null?"object expected":r.begin!=null&&r.hasOwnProperty("begin")&&!(u.isInteger(r.begin)||r.begin&&u.isInteger(r.begin.low)&&u.isInteger(r.begin.high))?"begin: integer|Long expected":r.end!=null&&r.hasOwnProperty("end")&&!(u.isInteger(r.end)||r.end&&u.isInteger(r.end.low)&&u.isInteger(r.end.high))?"end: integer|Long expected":null},e.fromObject=function(r){if(r instanceof s.onnx.TensorProto.Segment)return r;var i=new s.onnx.TensorProto.Segment;return r.begin!=null&&(u.Long?(i.begin=u.Long.fromValue(r.begin)).unsigned=!1:typeof r.begin=="string"?i.begin=parseInt(r.begin,10):typeof r.begin=="number"?i.begin=r.begin:typeof r.begin=="object"&&(i.begin=new u.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber())),r.end!=null&&(u.Long?(i.end=u.Long.fromValue(r.end)).unsigned=!1:typeof r.end=="string"?i.end=parseInt(r.end,10):typeof r.end=="number"?i.end=r.end:typeof r.end=="object"&&(i.end=new u.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber())),i},e.toObject=function(r,i){i||(i={});var d={};if(i.defaults){if(u.Long){var g=new u.Long(0,0,!1);d.begin=i.longs===String?g.toString():i.longs===Number?g.toNumber():g}else d.begin=i.longs===String?"0":0;u.Long?(g=new u.Long(0,0,!1),d.end=i.longs===String?g.toString():i.longs===Number?g.toNumber():g):d.end=i.longs===String?"0":0}return r.begin!=null&&r.hasOwnProperty("begin")&&(typeof r.begin=="number"?d.begin=i.longs===String?String(r.begin):r.begin:d.begin=i.longs===String?u.Long.prototype.toString.call(r.begin):i.longs===Number?new u.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber():r.begin),r.end!=null&&r.hasOwnProperty("end")&&(typeof r.end=="number"?d.end=i.longs===String?String(r.end):r.end:d.end=i.longs===String?u.Long.prototype.toString.call(r.end):i.longs===Number?new u.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber():r.end),d},e.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},e}(),t.DataLocation=function(){var e={},r=Object.create(e);return r[e[0]="DEFAULT"]=0,r[e[1]="EXTERNAL"]=1,r}(),t}(),f.TensorShapeProto=function(){function t(e){if(this.dim=[],e)for(var r=Object.keys(e),i=0;i>>3==1?(d.dim&&d.dim.length||(d.dim=[]),d.dim.push(s.onnx.TensorShapeProto.Dimension.decode(e,e.uint32()))):e.skipType(7&g)}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object 
expected";if(e.dim!=null&&e.hasOwnProperty("dim")){if(!Array.isArray(e.dim))return"dim: array expected";for(var r=0;r>>3){case 1:m.dimValue=i.int64();break;case 2:m.dimParam=i.string();break;case 3:m.denotation=i.string();break;default:i.skipType(7&b)}}return m},e.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},e.verify=function(i){if(typeof i!="object"||i===null)return"object expected";var d={};if(i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(d.value=1,!(u.isInteger(i.dimValue)||i.dimValue&&u.isInteger(i.dimValue.low)&&u.isInteger(i.dimValue.high))))return"dimValue: integer|Long expected";if(i.dimParam!=null&&i.hasOwnProperty("dimParam")){if(d.value===1)return"value: multiple values";if(d.value=1,!u.isString(i.dimParam))return"dimParam: string expected"}return i.denotation!=null&&i.hasOwnProperty("denotation")&&!u.isString(i.denotation)?"denotation: string expected":null},e.fromObject=function(i){if(i instanceof s.onnx.TensorShapeProto.Dimension)return i;var d=new s.onnx.TensorShapeProto.Dimension;return i.dimValue!=null&&(u.Long?(d.dimValue=u.Long.fromValue(i.dimValue)).unsigned=!1:typeof i.dimValue=="string"?d.dimValue=parseInt(i.dimValue,10):typeof i.dimValue=="number"?d.dimValue=i.dimValue:typeof i.dimValue=="object"&&(d.dimValue=new u.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber())),i.dimParam!=null&&(d.dimParam=String(i.dimParam)),i.denotation!=null&&(d.denotation=String(i.denotation)),d},e.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.denotation=""),i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(typeof i.dimValue=="number"?g.dimValue=d.longs===String?String(i.dimValue):i.dimValue:g.dimValue=d.longs===String?u.Long.prototype.toString.call(i.dimValue):d.longs===Number?new u.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber():i.dimValue,d.oneofs&&(g.value="dimValue")),i.dimParam!=null&&i.hasOwnProperty("dimParam")&&(g.dimParam=i.dimParam,d.oneofs&&(g.value="dimParam")),i.denotation!=null&&i.hasOwnProperty("denotation")&&(g.denotation=i.denotation),g},e.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},e}(),t}(),f.TypeProto=function(){function t(r){if(r)for(var i=Object.keys(r),d=0;d>>3){case 1:g.tensorType=s.onnx.TypeProto.Tensor.decode(r,r.uint32());break;case 6:g.denotation=r.string();break;default:r.skipType(7&m)}}return g},t.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},t.verify=function(r){if(typeof r!="object"||r===null)return"object expected";if(r.tensorType!=null&&r.hasOwnProperty("tensorType")){var i=s.onnx.TypeProto.Tensor.verify(r.tensorType);if(i)return"tensorType."+i}return r.denotation!=null&&r.hasOwnProperty("denotation")&&!u.isString(r.denotation)?"denotation: string expected":null},t.fromObject=function(r){if(r instanceof s.onnx.TypeProto)return r;var i=new s.onnx.TypeProto;if(r.tensorType!=null){if(typeof r.tensorType!="object")throw TypeError(".onnx.TypeProto.tensorType: object expected");i.tensorType=s.onnx.TypeProto.Tensor.fromObject(r.tensorType)}return r.denotation!=null&&(i.denotation=String(r.denotation)),i},t.toObject=function(r,i){i||(i={});var d={};return i.defaults&&(d.denotation=""),r.tensorType!=null&&r.hasOwnProperty("tensorType")&&(d.tensorType=s.onnx.TypeProto.Tensor.toObject(r.tensorType,i),i.oneofs&&(d.value="tensorType")),r.denotation!=null&&r.hasOwnProperty("denotation")&&(d.denotation=r.denotation),d},t.prototype.toJSON=function(){return 
this.constructor.toObject(this,a.util.toJSONOptions)},t.Tensor=function(){function r(i){if(i)for(var d=Object.keys(i),g=0;g>>3){case 1:m.elemType=i.int32();break;case 2:m.shape=s.onnx.TensorShapeProto.decode(i,i.uint32());break;default:i.skipType(7&b)}}return m},r.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},r.verify=function(i){if(typeof i!="object"||i===null)return"object expected";if(i.elemType!=null&&i.hasOwnProperty("elemType")&&!u.isInteger(i.elemType))return"elemType: integer expected";if(i.shape!=null&&i.hasOwnProperty("shape")){var d=s.onnx.TensorShapeProto.verify(i.shape);if(d)return"shape."+d}return null},r.fromObject=function(i){if(i instanceof s.onnx.TypeProto.Tensor)return i;var d=new s.onnx.TypeProto.Tensor;if(i.elemType!=null&&(d.elemType=0|i.elemType),i.shape!=null){if(typeof i.shape!="object")throw TypeError(".onnx.TypeProto.Tensor.shape: object expected");d.shape=s.onnx.TensorShapeProto.fromObject(i.shape)}return d},r.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.elemType=0,g.shape=null),i.elemType!=null&&i.hasOwnProperty("elemType")&&(g.elemType=i.elemType),i.shape!=null&&i.hasOwnProperty("shape")&&(g.shape=s.onnx.TensorShapeProto.toObject(i.shape,d)),g},r.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},r}(),t}(),f.OperatorSetIdProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.domain=e.string();break;case 2:d.version=e.int64();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.domain!=null&&e.hasOwnProperty("domain")&&!u.isString(e.domain)?"domain: string expected":e.version!=null&&e.hasOwnProperty("version")&&!(u.isInteger(e.version)||e.version&&u.isInteger(e.version.low)&&u.isInteger(e.version.high))?"version: integer|Long expected":null},t.fromObject=function(e){if(e instanceof s.onnx.OperatorSetIdProto)return e;var r=new s.onnx.OperatorSetIdProto;return e.domain!=null&&(r.domain=String(e.domain)),e.version!=null&&(u.Long?(r.version=u.Long.fromValue(e.version)).unsigned=!1:typeof e.version=="string"?r.version=parseInt(e.version,10):typeof e.version=="number"?r.version=e.version:typeof e.version=="object"&&(r.version=new u.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber())),r},t.toObject=function(e,r){r||(r={});var i={};if(r.defaults)if(i.domain="",u.Long){var d=new u.Long(0,0,!1);i.version=r.longs===String?d.toString():r.longs===Number?d.toNumber():d}else i.version=r.longs===String?"0":0;return e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.version!=null&&e.hasOwnProperty("version")&&(typeof e.version=="number"?i.version=r.longs===String?String(e.version):e.version:i.version=r.longs===String?u.Long.prototype.toString.call(e.version):r.longs===Number?new u.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber():e.version),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,a.util.toJSONOptions)},t}(),f),y.exports=s},2100:(y,n,o)=>{y.exports=o(9482)},9482:(y,n,o)=>{var l=n;function c(){l.util._configure(),l.Writer._configure(l.BufferWriter),l.Reader._configure(l.BufferReader)}l.build="minimal",l.Writer=o(1173),l.BufferWriter=o(3155),l.Reader=o(1408),l.BufferReader=o(593),l.util=o(9693),l.rpc=o(5994),l.roots=o(5054),l.configure=c,c()},1408:(y,n,o)=>{y.exports=p;var l,c=o(9693),f=c.LongBits,a=c.utf8;function h(d,g){return 
RangeError("index out of range: "+d.pos+" + "+(g||1)+" > "+d.len)}function p(d){this.buf=d,this.pos=0,this.len=d.length}var u,s=typeof Uint8Array<"u"?function(d){if(d instanceof Uint8Array||Array.isArray(d))return new p(d);throw Error("illegal buffer")}:function(d){if(Array.isArray(d))return new p(d);throw Error("illegal buffer")},t=function(){return c.Buffer?function(d){return(p.create=function(g){return c.Buffer.isBuffer(g)?new l(g):s(g)})(d)}:s};function e(){var d=new f(0,0),g=0;if(!(this.len-this.pos>4)){for(;g<3;++g){if(this.pos>=this.len)throw h(this);if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d}return d.lo=(d.lo|(127&this.buf[this.pos++])<<7*g)>>>0,d}for(;g<4;++g)if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d;if(d.lo=(d.lo|(127&this.buf[this.pos])<<28)>>>0,d.hi=(d.hi|(127&this.buf[this.pos])>>4)>>>0,this.buf[this.pos++]<128)return d;if(g=0,this.len-this.pos>4){for(;g<5;++g)if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}else for(;g<5;++g){if(this.pos>=this.len)throw h(this);if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}throw Error("invalid varint encoding")}function r(d,g){return(d[g-4]|d[g-3]<<8|d[g-2]<<16|d[g-1]<<24)>>>0}function i(){if(this.pos+8>this.len)throw h(this,8);return new f(r(this.buf,this.pos+=4),r(this.buf,this.pos+=4))}p.create=t(),p.prototype._slice=c.Array.prototype.subarray||c.Array.prototype.slice,p.prototype.uint32=(u=4294967295,function(){if(u=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128||(u=(u|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)||(u=(u|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)||(u=(u|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)||(u=(u|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128))return u;if((this.pos+=5)>this.len)throw this.pos=this.len,h(this,10);return u}),p.prototype.int32=function(){return 0|this.uint32()},p.prototype.sint32=function(){var d=this.uint32();return d>>>1^-(1&d)|0},p.prototype.bool=function(){return this.uint32()!==0},p.prototype.fixed32=function(){if(this.pos+4>this.len)throw h(this,4);return r(this.buf,this.pos+=4)},p.prototype.sfixed32=function(){if(this.pos+4>this.len)throw h(this,4);return 0|r(this.buf,this.pos+=4)},p.prototype.float=function(){if(this.pos+4>this.len)throw h(this,4);var d=c.float.readFloatLE(this.buf,this.pos);return this.pos+=4,d},p.prototype.double=function(){if(this.pos+8>this.len)throw h(this,4);var d=c.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,d},p.prototype.bytes=function(){var d=this.uint32(),g=this.pos,m=this.pos+d;if(m>this.len)throw h(this,d);return this.pos+=d,Array.isArray(this.buf)?this.buf.slice(g,m):g===m?new this.buf.constructor(0):this._slice.call(this.buf,g,m)},p.prototype.string=function(){var d=this.bytes();return a.read(d,0,d.length)},p.prototype.skip=function(d){if(typeof d=="number"){if(this.pos+d>this.len)throw h(this,d);this.pos+=d}else do if(this.pos>=this.len)throw h(this);while(128&this.buf[this.pos++]);return this},p.prototype.skipType=function(d){switch(d){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;(d=7&this.uint32())!=4;)this.skipType(d);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+d+" at offset "+this.pos)}return this},p._configure=function(d){l=d,p.create=t(),l._configure();var g=c.Long?"toLong":"toNumber";c.merge(p.prototype,{int64:function(){return 
e.call(this)[g](!1)},uint64:function(){return e.call(this)[g](!0)},sint64:function(){return e.call(this).zzDecode()[g](!1)},fixed64:function(){return i.call(this)[g](!0)},sfixed64:function(){return i.call(this)[g](!1)}})}},593:(y,n,o)=>{y.exports=f;var l=o(1408);(f.prototype=Object.create(l.prototype)).constructor=f;var c=o(9693);function f(a){l.call(this,a)}f._configure=function(){c.Buffer&&(f.prototype._slice=c.Buffer.prototype.slice)},f.prototype.string=function(){var a=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+a,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+a,this.len))},f._configure()},5054:y=>{y.exports={}},5994:(y,n,o)=>{n.Service=o(7948)},7948:(y,n,o)=>{y.exports=c;var l=o(9693);function c(f,a,h){if(typeof f!="function")throw TypeError("rpcImpl must be a function");l.EventEmitter.call(this),this.rpcImpl=f,this.requestDelimited=!!a,this.responseDelimited=!!h}(c.prototype=Object.create(l.EventEmitter.prototype)).constructor=c,c.prototype.rpcCall=function f(a,h,p,u,s){if(!u)throw TypeError("request must be specified");var t=this;if(!s)return l.asPromise(f,t,a,h,p,u);if(t.rpcImpl)try{return t.rpcImpl(a,h[t.requestDelimited?"encodeDelimited":"encode"](u).finish(),function(e,r){if(e)return t.emit("error",e,a),s(e);if(r!==null){if(!(r instanceof p))try{r=p[t.responseDelimited?"decodeDelimited":"decode"](r)}catch(i){return t.emit("error",i,a),s(i)}return t.emit("data",r,a),s(null,r)}t.end(!0)})}catch(e){return t.emit("error",e,a),void setTimeout(function(){s(e)},0)}else setTimeout(function(){s(Error("already ended"))},0)},c.prototype.end=function(f){return this.rpcImpl&&(f||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},1945:(y,n,o)=>{y.exports=c;var l=o(9693);function c(p,u){this.lo=p>>>0,this.hi=u>>>0}var f=c.zero=new c(0,0);f.toNumber=function(){return 0},f.zzEncode=f.zzDecode=function(){return this},f.length=function(){return 1};var a=c.zeroHash="\0\0\0\0\0\0\0\0";c.fromNumber=function(p){if(p===0)return f;var u=p<0;u&&(p=-p);var s=p>>>0,t=(p-s)/4294967296>>>0;return u&&(t=~t>>>0,s=~s>>>0,++s>4294967295&&(s=0,++t>4294967295&&(t=0))),new c(s,t)},c.from=function(p){if(typeof p=="number")return c.fromNumber(p);if(l.isString(p)){if(!l.Long)return c.fromNumber(parseInt(p,10));p=l.Long.fromString(p)}return p.low||p.high?new c(p.low>>>0,p.high>>>0):f},c.prototype.toNumber=function(p){if(!p&&this.hi>>>31){var u=1+~this.lo>>>0,s=~this.hi>>>0;return u||(s=s+1>>>0),-(u+4294967296*s)}return this.lo+4294967296*this.hi},c.prototype.toLong=function(p){return l.Long?new l.Long(0|this.lo,0|this.hi,!!p):{low:0|this.lo,high:0|this.hi,unsigned:!!p}};var h=String.prototype.charCodeAt;c.fromHash=function(p){return p===a?f:new c((h.call(p,0)|h.call(p,1)<<8|h.call(p,2)<<16|h.call(p,3)<<24)>>>0,(h.call(p,4)|h.call(p,5)<<8|h.call(p,6)<<16|h.call(p,7)<<24)>>>0)},c.prototype.toHash=function(){return String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},c.prototype.zzEncode=function(){var p=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^p)>>>0,this.lo=(this.lo<<1^p)>>>0,this},c.prototype.zzDecode=function(){var p=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^p)>>>0,this.hi=(this.hi>>>1^p)>>>0,this},c.prototype.length=function(){var p=this.lo,u=(this.lo>>>28|this.hi<<4)>>>0,s=this.hi>>>24;return s===0?u===0?p<16384?p<128?1:2:p<2097152?3:4:u<16384?u<128?5:6:u<2097152?7:8:s<128?9:10}},9693:function(y,n,o){var 
l=n;function c(a,h,p){for(var u=Object.keys(h),s=0;s0)},l.Buffer=function(){try{var a=l.inquire("buffer").Buffer;return a.prototype.utf8Write?a:null}catch{return null}}(),l._Buffer_from=null,l._Buffer_allocUnsafe=null,l.newBuffer=function(a){return typeof a=="number"?l.Buffer?l._Buffer_allocUnsafe(a):new l.Array(a):l.Buffer?l._Buffer_from(a):typeof Uint8Array>"u"?a:new Uint8Array(a)},l.Array=typeof Uint8Array<"u"?Uint8Array:Array,l.Long=l.global.dcodeIO&&l.global.dcodeIO.Long||l.global.Long||l.inquire("long"),l.key2Re=/^true|false|0|1$/,l.key32Re=/^-?(?:0|[1-9][0-9]*)$/,l.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,l.longToHash=function(a){return a?l.LongBits.from(a).toHash():l.LongBits.zeroHash},l.longFromHash=function(a,h){var p=l.LongBits.fromHash(a);return l.Long?l.Long.fromBits(p.lo,p.hi,h):p.toNumber(!!h)},l.merge=c,l.lcFirst=function(a){return a.charAt(0).toLowerCase()+a.substring(1)},l.newError=f,l.ProtocolError=f("ProtocolError"),l.oneOfGetter=function(a){for(var h={},p=0;p-1;--s)if(h[u[s]]===1&&this[u[s]]!==void 0&&this[u[s]]!==null)return u[s]}},l.oneOfSetter=function(a){return function(h){for(var p=0;p{y.exports=t;var l,c=o(9693),f=c.LongBits,a=c.base64,h=c.utf8;function p(b,_,v){this.fn=b,this.len=_,this.next=void 0,this.val=v}function u(){}function s(b){this.head=b.head,this.tail=b.tail,this.len=b.len,this.next=b.states}function t(){this.len=0,this.head=new p(u,0,0),this.tail=this.head,this.states=null}var e=function(){return c.Buffer?function(){return(t.create=function(){return new l})()}:function(){return new t}};function r(b,_,v){_[v]=255&b}function i(b,_){this.len=b,this.next=void 0,this.val=_}function d(b,_,v){for(;b.hi;)_[v++]=127&b.lo|128,b.lo=(b.lo>>>7|b.hi<<25)>>>0,b.hi>>>=7;for(;b.lo>127;)_[v++]=127&b.lo|128,b.lo=b.lo>>>7;_[v++]=b.lo}function g(b,_,v){_[v]=255&b,_[v+1]=b>>>8&255,_[v+2]=b>>>16&255,_[v+3]=b>>>24}t.create=e(),t.alloc=function(b){return new c.Array(b)},c.Array!==Array&&(t.alloc=c.pool(t.alloc,c.Array.prototype.subarray)),t.prototype._push=function(b,_,v){return this.tail=this.tail.next=new p(b,_,v),this.len+=_,this},i.prototype=Object.create(p.prototype),i.prototype.fn=function(b,_,v){for(;b>127;)_[v++]=127&b|128,b>>>=7;_[v]=b},t.prototype.uint32=function(b){return this.len+=(this.tail=this.tail.next=new i((b>>>=0)<128?1:b<16384?2:b<2097152?3:b<268435456?4:5,b)).len,this},t.prototype.int32=function(b){return b<0?this._push(d,10,f.fromNumber(b)):this.uint32(b)},t.prototype.sint32=function(b){return this.uint32((b<<1^b>>31)>>>0)},t.prototype.uint64=function(b){var _=f.from(b);return this._push(d,_.length(),_)},t.prototype.int64=t.prototype.uint64,t.prototype.sint64=function(b){var _=f.from(b).zzEncode();return this._push(d,_.length(),_)},t.prototype.bool=function(b){return this._push(r,1,b?1:0)},t.prototype.fixed32=function(b){return this._push(g,4,b>>>0)},t.prototype.sfixed32=t.prototype.fixed32,t.prototype.fixed64=function(b){var _=f.from(b);return this._push(g,4,_.lo)._push(g,4,_.hi)},t.prototype.sfixed64=t.prototype.fixed64,t.prototype.float=function(b){return this._push(c.float.writeFloatLE,4,b)},t.prototype.double=function(b){return this._push(c.float.writeDoubleLE,8,b)};var m=c.Array.prototype.set?function(b,_,v){_.set(b,v)}:function(b,_,v){for(var w=0;w>>0;if(!_)return this._push(r,1,0);if(c.isString(b)){var v=t.alloc(_=a.length(b));a.decode(b,v,0),b=v}return this.uint32(_)._push(m,_,b)},t.prototype.string=function(b){var _=h.length(b);return _?this.uint32(_)._push(h.write,_,b):this._push(r,1,0)},t.prototype.fork=function(){return 
this.states=new s(this),this.head=this.tail=new p(u,0,0),this.len=0,this},t.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new p(u,0,0),this.len=0),this},t.prototype.ldelim=function(){var b=this.head,_=this.tail,v=this.len;return this.reset().uint32(v),v&&(this.tail.next=b.next,this.tail=_,this.len+=v),this},t.prototype.finish=function(){for(var b=this.head.next,_=this.constructor.alloc(this.len),v=0;b;)b.fn(b.val,_,v),v+=b.len,b=b.next;return _},t._configure=function(b){l=b,t.create=e(),l._configure()}},3155:(y,n,o)=>{y.exports=f;var l=o(1173);(f.prototype=Object.create(l.prototype)).constructor=f;var c=o(9693);function f(){l.call(this)}function a(h,p,u){h.length<40?c.utf8.write(h,p,u):p.utf8Write?p.utf8Write(h,u):p.write(h,u)}f._configure=function(){f.alloc=c._Buffer_allocUnsafe,f.writeBytesBuffer=c.Buffer&&c.Buffer.prototype instanceof Uint8Array&&c.Buffer.prototype.set.name==="set"?function(h,p,u){p.set(h,u)}:function(h,p,u){if(h.copy)h.copy(p,u,0,h.length);else for(var s=0;s>>0;return this.uint32(p),p&&this._push(f.writeBytesBuffer,p,h),this},f.prototype.string=function(h){var p=c.Buffer.byteLength(h);return this.uint32(p),p&&this._push(a,p,h),this},f._configure()},7714:(y,n,o)=>{n.R=void 0;const l=o(6919),c=o(7448);n.R=new class{async init(){}async createSessionHandler(f,a){const h=new l.Session(a);return await h.loadModel(f),new c.OnnxjsSessionHandler(h)}}},4200:(y,n,o)=>{n.c8=n.rX=void 0;const l=o(1670),c=o(5381),f=o(2157),a=o(2306);n.rX=()=>{if((typeof l.env.wasm.initTimeout!="number"||l.env.wasm.initTimeout<0)&&(l.env.wasm.initTimeout=0),typeof l.env.wasm.simd!="boolean"&&(l.env.wasm.simd=!0),typeof l.env.wasm.proxy!="boolean"&&(l.env.wasm.proxy=!1),typeof l.env.wasm.numThreads!="number"||!Number.isInteger(l.env.wasm.numThreads)||l.env.wasm.numThreads<=0){const h=typeof navigator>"u"?(0,c.cpus)().length:navigator.hardwareConcurrency;l.env.wasm.numThreads=Math.min(4,Math.ceil((h||1)/2))}},n.c8=new class{async init(){(0,n.rX)(),await(0,f.initWasm)()}async createSessionHandler(h,p){const u=new a.OnnxruntimeWebAssemblySessionHandler;return await u.loadModel(h,p),Promise.resolve(u)}}},6018:function(y,n,o){var l=this&&this.__createBinding||(Object.create?function(a,h,p,u){u===void 0&&(u=p);var s=Object.getOwnPropertyDescriptor(h,p);s&&!("get"in s?!h.__esModule:s.writable||s.configurable)||(s={enumerable:!0,get:function(){return h[p]}}),Object.defineProperty(a,u,s)}:function(a,h,p,u){u===void 0&&(u=p),a[u]=h[p]}),c=this&&this.__exportStar||function(a,h){for(var p in a)p==="default"||Object.prototype.hasOwnProperty.call(h,p)||l(h,a,p)};Object.defineProperty(n,"__esModule",{value:!0}),c(o(1670),n);const f=o(1670);{const a=o(7714).R;(0,f.registerBackend)("webgl",a,-10)}{const a=o(4200).c8;(0,f.registerBackend)("cpu",a,10),(0,f.registerBackend)("wasm",a,10),(0,f.registerBackend)("xnnpack",a,9)}},246:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createAttributeWithCacheKey=void 0;class o{constructor(c){Object.assign(this,c)}get cacheKey(){return this._cacheKey||(this._cacheKey=Object.getOwnPropertyNames(this).sort().map(c=>`${this[c]}`).join(";")),this._cacheKey}}n.createAttributeWithCacheKey=l=>new o(l)},7778:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Attribute=void 0;const l=o(1446),c=o(9395),f=o(9162),a=o(2517);var h=c.onnxruntime.experimental.fbs;class p{constructor(s){if(this._attributes=new Map,s!=null){for(const t of s)t 
instanceof l.onnx.AttributeProto?this._attributes.set(t.name,[p.getValue(t),p.getType(t)]):t instanceof h.Attribute&&this._attributes.set(t.name(),[p.getValue(t),p.getType(t)]);if(this._attributes.sizef.Tensor.fromProto(r));if(s instanceof h.Attribute)return e.map(r=>f.Tensor.fromOrtTensor(r))}if(t===l.onnx.AttributeProto.AttributeType.STRING&&s instanceof l.onnx.AttributeProto){const r=e;return(0,a.decodeUtf8String)(r)}return t===l.onnx.AttributeProto.AttributeType.STRINGS&&s instanceof l.onnx.AttributeProto?e.map(a.decodeUtf8String):e}static getValueNoCheck(s){return s instanceof l.onnx.AttributeProto?this.getValueNoCheckFromOnnxFormat(s):this.getValueNoCheckFromOrtFormat(s)}static getValueNoCheckFromOnnxFormat(s){switch(s.type){case l.onnx.AttributeProto.AttributeType.FLOAT:return s.f;case l.onnx.AttributeProto.AttributeType.INT:return s.i;case l.onnx.AttributeProto.AttributeType.STRING:return s.s;case l.onnx.AttributeProto.AttributeType.TENSOR:return s.t;case l.onnx.AttributeProto.AttributeType.GRAPH:return s.g;case l.onnx.AttributeProto.AttributeType.FLOATS:return s.floats;case l.onnx.AttributeProto.AttributeType.INTS:return s.ints;case l.onnx.AttributeProto.AttributeType.STRINGS:return s.strings;case l.onnx.AttributeProto.AttributeType.TENSORS:return s.tensors;case l.onnx.AttributeProto.AttributeType.GRAPHS:return s.graphs;default:throw new Error(`unsupported attribute type: ${l.onnx.AttributeProto.AttributeType[s.type]}`)}}static getValueNoCheckFromOrtFormat(s){switch(s.type()){case h.AttributeType.FLOAT:return s.f();case h.AttributeType.INT:return s.i();case h.AttributeType.STRING:return s.s();case h.AttributeType.TENSOR:return s.t();case h.AttributeType.GRAPH:return s.g();case h.AttributeType.FLOATS:return s.floatsArray();case h.AttributeType.INTS:{const t=[];for(let e=0;e{Object.defineProperty(n,"__esModule",{value:!0}),n.resolveBackend=n.backend=void 0;const l=o(5038),c=new Map;async function f(a){const h=n.backend;if(h[a]!==void 0&&function(p){const u=p;return"initialize"in u&&typeof u.initialize=="function"&&"createSessionHandler"in u&&typeof u.createSessionHandler=="function"&&"dispose"in u&&typeof u.dispose=="function"}(h[a])){const p=h[a];let u=p.initialize();if(typeof u=="object"&&"then"in u&&(u=await u),u)return c.set(a,p),p}}n.backend={webgl:new l.WebGLBackend},n.resolveBackend=async function a(h){if(!h)return a(["webgl"]);{const p=typeof h=="string"?[h]:h;for(const u of p){const s=c.get(u);if(s)return s;const t=await f(u);if(t)return t}}throw new Error("no available backend to use")}},5038:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLBackend=void 0;const l=o(1670),c=o(6231),f=o(6416),a=o(7305);n.WebGLBackend=class{get contextId(){return l.env.webgl.contextId}set contextId(h){l.env.webgl.contextId=h}get matmulMaxBatchSize(){return l.env.webgl.matmulMaxBatchSize}set matmulMaxBatchSize(h){l.env.webgl.matmulMaxBatchSize=h}get textureCacheMode(){return l.env.webgl.textureCacheMode}set textureCacheMode(h){l.env.webgl.textureCacheMode=h}get pack(){return l.env.webgl.pack}set pack(h){l.env.webgl.pack=h}get async(){return l.env.webgl.async}set async(h){l.env.webgl.async=h}initialize(){try{return this.glContext=(0,a.createWebGLContext)(this.contextId),typeof this.matmulMaxBatchSize!="number"&&(this.matmulMaxBatchSize=16),typeof this.textureCacheMode!="string"&&(this.textureCacheMode="full"),typeof this.pack!="boolean"&&(this.pack=!1),typeof this.async!="boolean"&&(this.async=!1),c.Logger.setWithEnv(l.env),c.Logger.verbose("WebGLBackend",`Created WebGLContext: 
${typeof this.glContext} with matmulMaxBatchSize: ${this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${this.async}.`),!0}catch(h){return c.Logger.warning("WebGLBackend",`Unable to initialize WebGLBackend. ${h}`),!1}}createSessionHandler(h){return new f.WebGLSessionHandler(this,h)}dispose(){this.glContext.dispose()}}},5107:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.CoordsGlslLib=void 0;const l=o(2517),c=o(8520),f=o(5060),a=o(7859),h=o(9390);class p extends c.GlslLib{constructor(s){super(s)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.offsetToCoords()),this.coordsToOffset()),this.toVec()),this.valueFrom()),this.getCommonUtilFuncs()),this.getInputsSamplingSnippets()),this.getOutputSamplingSnippet())}getCustomTypes(){return{}}offsetToCoords(){return{offsetToCoords:new c.GlslLibRoutine(`
- vec2 offsetToCoords(int offset, int width, int height) {
- int t = offset / width;
- int s = offset - t*width;
- vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);
- return coords;
- }
- `)}}coordsToOffset(){return{coordsToOffset:new c.GlslLibRoutine(`
- int coordsToOffset(vec2 coords, int width, int height) {
- float s = coords.s * float(width);
- float t = coords.t * float(height);
- int offset = int(t) * width + int(s);
- return offset;
- }
- `)}}getOutputSamplingSnippet(){const s=this.context.outputTextureLayout;return s.isPacked?this.getPackedOutputSamplingSnippet(s):this.getUnpackedOutputSamplingSnippet(s)}getPackedOutputSamplingSnippet(s){const t=s.unpackedShape,e=[s.width,s.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputPacked1DCoords(t,e);break;case 2:r[i]=this.getOutputPacked2DCoords(t,e);break;case 3:r[i]=this.getOutputPacked3DCoords(t,e);break;default:r[i]=this.getOutputPackedNDCoords(t,e)}const d=`
- void setOutput(vec4 val) {
- ${(0,f.getGlsl)(this.context.glContext.version).output} = val;
- }
- `;return r.floatTextureSetRGBA=new c.GlslLibRoutine(d),r}getUnpackedOutputSamplingSnippet(s){const t=s.unpackedShape,e=[s.width,s.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputUnpacked1DCoords(t,e);break;case 2:r[i]=this.getOutputUnpacked2DCoords(t,e);break;case 3:r[i]=this.getOutputUnpacked3DCoords(t,e);break;case 4:r[i]=this.getOutputUnpacked4DCoords(t,e);break;case 5:r[i]=this.getOutputUnpacked5DCoords(t,e);break;case 6:r[i]=this.getOutputUnpacked6DCoords(t,e);break;default:throw new Error(`Unsupported output dimensionality: ${t.length}`)}const d=`
- void setOutput(float val) {
- ${(0,f.getGlsl)(this.context.glContext.version).output} = vec4(val, 0, 0, 0);
- }
- `;return r.floatTextureSetR=new c.GlslLibRoutine(d),r}getOutputScalarCoords(){return new c.GlslLibRoutine(`
- int getOutputCoords() {
- return 0;
- }
- `)}getOutputPacked1DCoords(s,t){const e=t;let r="";return e[0]===1?(r=`
- int getOutputCoords() {
- return 2 * int(TexCoords.y * ${e[1]}.0);
- }
- `,new c.GlslLibRoutine(r)):e[1]===1?(r=`
- int getOutputCoords() {
- return 2 * int(TexCoords.x * ${e[0]}.0);
- }
- `,new c.GlslLibRoutine(r)):(r=`
- int getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${e[0]}, ${e[1]}));
- return 2 * (resTexRC.y * ${e[0]} + resTexRC.x);
- }
- `,new c.GlslLibRoutine(r))}getOutputPacked2DCoords(s,t){let e="";if(l.ArrayUtil.arraysEqual(s,t))return e=`
- ivec2 getOutputCoords() {
- return 2 * ivec2(TexCoords.xy * vec2(${t[0]}, ${t[1]}));
- }
- `,new c.GlslLibRoutine(e);const r=t,i=Math.ceil(s[1]/2);return e=`
- ivec2 getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${r[0]}, ${r[1]}));
-
- int index = resTexRC.y * ${r[0]} + resTexRC.x;
-
- // reverse r and c order for packed texture
- int r = imod(index, ${i}) * 2;
- int c = 2 * (index / ${i});
-
- return ivec2(r, c);
- }
- `,new c.GlslLibRoutine(e)}getOutputPacked3DCoords(s,t){const e=[t[0],t[1]],r=Math.ceil(s[2]/2),i=r*Math.ceil(s[1]/2),d=`
- ivec3 getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${e[0]}, ${e[1]}));
- int index = resTexRC.y * ${e[0]} + resTexRC.x;
-
- int b = index / ${i};
- index -= b * ${i};
-
- // reverse r and c order for packed texture
- int r = imod(index, ${r}) * 2;
- int c = 2 * (index / ${r});
-
- return ivec3(b, r, c);
- }
- `;return new c.GlslLibRoutine(d)}getOutputPackedNDCoords(s,t){const e=[t[0],t[1]],r=Math.ceil(s[s.length-1]/2),i=r*Math.ceil(s[s.length-2]/2);let d=i,g="",m="b, r, c";for(let _=2;_=0;--m)i[m]=i[m+1]*s[m+1];const d=["r","c","d"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
- ivec3 getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${t[0]}, ${t[1]}));
- int index = resTexRC.y * ${t[0]} + resTexRC.x;
- ${g}
- return ivec3(r, c, d);
- }
- `,new c.GlslLibRoutine(e)}getOutputUnpacked4DCoords(s,t){let e="";const r=s.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=s[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*s[m+1];const d=["r","c","d","d2"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
- ivec4 getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${t[0]}, ${t[1]}));
- int index = resTexRC.y * ${t[0]} + resTexRC.x;
- ${g}
- return ivec4(r, c, d, d2);
- }
- `,new c.GlslLibRoutine(e)}getOutputUnpacked5DCoords(s,t){let e="";const r=s.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=s[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*s[m+1];const d=["r","c","d","d2","d3"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
- ivec5 getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${t[0]}, ${t[1]}));
- int index = resTexRC.y * ${t[0]} + resTexRC.x;
- ${g}
- return ivec5(r, c, d, d2, d3);
- }
- `,new c.GlslLibRoutine(e)}getOutputUnpacked6DCoords(s,t){let e="";const r=s.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=s[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*s[m+1];const d=["r","c","d","d2","d3","d4"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=`
- ivec6 getOutputCoords() {
- ivec2 resTexRC = ivec2(TexCoords.xy *
- vec2(${t[0]}, ${t[1]}));
- int index = resTexRC.y * ${t[0]} + resTexRC.x;
- ${g}
- return ivec6(r, c, d, d2, d3, d4);
- }
- `,new c.GlslLibRoutine(e)}getCommonUtilFuncs(){const s={};let t="uvFromFlat";s[t]=new c.GlslLibRoutine(`
- vec2 uvFromFlat(int texNumR, int texNumC, int index) {
- int texC = index / texNumR;
- int texR = index - texC * texNumR;
- // TODO: swap texR, texC order in following function so row is corresponding to u and column is corresponding to
- // v.
- return (vec2(texR, texC) + halfCR) / vec2(texNumR, texNumC);
- }
- `),t="packedUVfrom1D",s[t]=new c.GlslLibRoutine(`
- vec2 packedUVfrom1D(int texNumR, int texNumC, int index) {
- int texelIndex = index / 2;
- int texR = texelIndex / texNumC;
- int texC = texelIndex - texR * texNumC;
- return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
- }
- `),t="packedUVfrom2D",s[t]=new c.GlslLibRoutine(`
- vec2 packedUVfrom2D(int texNumR, int texNumC, int texelsInLogicalRow, int row, int col) {
- int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2);
- int texR = texelIndex / texNumC;
- int texC = texelIndex - texR * texNumC;
- return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
- }
- `),t="packedUVfrom3D",s[t]=new c.GlslLibRoutine(`
- vec2 packedUVfrom3D(int texNumR, int texNumC,
- int texelsInBatch, int texelsInLogicalRow, int b,
- int row, int col) {
- int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2);
- int texR = index / texNumC;
- int texC = index - texR * texNumC;
- return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
- }
- `),t="sampleTexture";const e=(0,f.getGlsl)(this.context.glContext.version);return s[t]=new c.GlslLibRoutine(`
- float sampleTexture(sampler2D textureSampler, vec2 uv) {
- return ${e.texture2D}(textureSampler, uv).r;
- }`),s}getInputsSamplingSnippets(){const s={},t=this.context.outputTextureLayout;return this.context.programInfo.inputNames.forEach((e,r)=>{const i=this.context.inputTextureLayouts[r],d=(0,h.generateShaderFuncNameFromInputSamplerName)(e);i.isPacked?s[d]=this.getPackedSamplerFromInput(d,e,i):s[d]=this.getUnpackedSamplerFromInput(d,e,i);const g=(0,h.generateShaderFuncNameFromInputSamplerNameAtOutCoords)(e);i.unpackedShape.length<=t.unpackedShape.length&&(i.isPacked?s[g]=this.getPackedSamplerAtOutputCoords(g,i,t,e):s[g]=this.getUnpackedSamplerAtOutputCoords(g,i,t,e))}),s}getPackedSamplerAtOutputCoords(s,t,e,r){const i=t.unpackedShape,d=e.unpackedShape,g=r,m=(0,h.generateShaderFuncNameFromInputSamplerName)(g),b=i.length,_=d.length,v=l.BroadcastUtil.getBroadcastDims(i,d),w=(0,h.getCoordsDataType)(_),S=_-b;let A;const O=(0,h.getGlChannels)();A=b===0?"":_<2&&v.length>=1?"coords = 0;":v.map(N=>`coords.${O[N+S]} = 0;`).join(`
-`);let x="";x=_<2&&b>0?"coords":i.map((N,H)=>`coords.${O[H+S]}`).join(", ");let I="return outputValue;";const $=l.ShapeUtil.size(i)===1,B=l.ShapeUtil.size(d)===1;if(b!==1||$||B){if($&&!B)I=_===1?`
- return vec4(outputValue.x, outputValue.x, 0., 0.);
- `:`
- return vec4(outputValue.x);
- `;else if(v.length){const N=b-2,H=b-1;v.indexOf(N)>-1&&v.indexOf(H)>-1?I="return vec4(outputValue.x);":v.indexOf(N)>-1?I="return vec4(outputValue.x, outputValue.y, outputValue.x, outputValue.y);":v.indexOf(H)>-1&&(I="return vec4(outputValue.xx, outputValue.zz);")}}else I=`
- return vec4(outputValue.xy, outputValue.xy);
- `;const L=`
- vec4 ${s}() {
- ${w} coords = getOutputCoords();
-
- int lastDim = coords.${O[_-1]};
- coords.${O[_-1]} = coords.${O[_-2]};
- coords.${O[_-2]} = lastDim;
-
- ${A}
- vec4 outputValue = ${m}(${x});
- ${I}
- }
- `;return new c.GlslLibRoutine(L,["coordinates.getOutputCoords"])}getUnpackedSamplerAtOutputCoords(s,t,e,r){const i=[e.width,e.height],d=[t.width,t.height],g=t.unpackedShape.length,m=e.unpackedShape.length,b=t.unpackedShape,_=e.unpackedShape,v=(0,h.generateShaderFuncNameFromInputSamplerName)(r);if(g===m&&l.ArrayUtil.arraysEqual(d,i)){const B=`
- float ${s}() {
- return sampleTexture(${r}, TexCoords);
- }
- `;return new c.GlslLibRoutine(B,["coordinates.sampleTexture"])}const w=(0,h.getCoordsDataType)(m),S=l.BroadcastUtil.getBroadcastDims(b,_),A=m-g;let O;const x=(0,h.getGlChannels)();O=g===0?"":m<2&&S.length>=1?"coords = 0;":S.map(B=>`coords.${x[B+A]} = 0;`).join(`
-`);let I="";I=m<2&&g>0?"coords":t.unpackedShape.map((B,L)=>`coords.${x[L+A]}`).join(", ");const $=`
- float ${s}() {
- ${w} coords = getOutputCoords();
- ${O}
- return ${v}(${I});
- }
- `;return new c.GlslLibRoutine($,["coordinates.getOutputCoords"])}getPackedSamplerFromInput(s,t,e){switch(e.unpackedShape.length){case 0:return this.getPackedSamplerScalar(s,t);case 1:return this.getPackedSampler1D(s,t,e);case 2:return this.getPackedSampler2D(s,t,e);case 3:return this.getPackedSampler3D(s,t,e);default:return this.getPackedSamplerND(s,t,e)}}getUnpackedSamplerFromInput(s,t,e){const r=e.unpackedShape;switch(r.length){case 0:return this.getUnpackedSamplerScalar(s,t,e);case 1:return this.getUnpackedSampler1D(s,t,e);case 2:return this.getUnpackedSampler2D(s,t,e);case 3:return this.getUnpackedSampler3D(s,t,e);case 4:return this.getUnpackedSampler4D(s,t,e);case 5:return this.getUnpackedSampler5D(s,t,e);case 6:return this.getUnpackedSampler6D(s,t,e);default:throw new Error(`Unsupported dimension ${r.length}-D`)}}getPackedSamplerScalar(s,t){const e=`
- vec4 ${s}() {
- return ${(0,f.getGlsl)(this.context.glContext.version).texture2D}(${t}, halfCR);
- }
- `;return new c.GlslLibRoutine(e)}getPackedSampler1D(s,t,e){const r=[e.width,e.height],i=[r[1],r[0]],d=(0,f.getGlsl)(this.context.glContext.version),g=`vec4 ${s}(int index) {
- vec2 uv = packedUVfrom1D(
- ${i[0]}, ${i[1]}, index);
- return ${d.texture2D}(${t}, uv);
- }`;return new c.GlslLibRoutine(g,["coordinates.packedUVfrom1D"])}getPackedSampler2D(s,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=(0,f.getGlsl)(this.context.glContext.version),g=i[0],m=i[1];if(i!=null&&l.ArrayUtil.arraysEqual(r,i)){const w=`vec4 ${s}(int row, int col) {
- vec2 uv = (vec2(col, row) + halfCR) / vec2(${m}.0, ${g}.0);
- return ${d.texture2D}(${t}, uv);
- }`;return new c.GlslLibRoutine(w)}const b=i,_=Math.ceil(r[1]/2),v=`vec4 ${s}(int row, int col) {
- vec2 uv = packedUVfrom2D(${b[1]}, ${b[0]}, ${_}, row, col);
- return ${d.texture2D}(${t}, uv);
- }`;return new c.GlslLibRoutine(v,["coordinates.packedUVfrom2D"])}getPackedSampler3D(s,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=[i[0],i[1]],g=(0,f.getGlsl)(this.context.glContext.version);if(r[0]===1){const w=r.slice(1),S=[1,2],A=(0,h.squeezeInputShape)(r,w),O=["b","row","col"],x=JSON.parse(JSON.stringify(e));x.unpackedShape=A;const I=this.getPackedSamplerFromInput(s,t,x),$=`${I.routineBody}
- vec4 ${s}(int b, int row, int col) {
- return ${s}(${(0,h.getSqueezedParams)(O,S)});
- } `;return new c.GlslLibRoutine($,I.dependencies)}const m=d[0],b=d[1],_=Math.ceil(r[2]/2),v=`vec4 ${s}(int b, int row, int col) {
- vec2 uv = packedUVfrom3D(
- ${b}, ${m}, ${_*Math.ceil(r[1]/2)}, ${_}, b, row, col);
- return ${g.texture2D}(${t}, uv);}`;return new c.GlslLibRoutine(v,["coordinates.packedUVfrom3D"])}getPackedSamplerND(s,t,e){const r=e.unpackedShape,i=r.length,d=[e.width,e.height],g=(0,f.getGlsl)(this.context.glContext.version),m=[d[0],d[1]],b=m[1],_=m[0],v=Math.ceil(r[i-1]/2);let w=v*Math.ceil(r[i-2]/2),S="int b, int row, int col",A=`b * ${w} + (row / 2) * ${v} + (col / 2)`;for(let x=2;x{const r=this.context.inputTextureLayouts[e],i=(r.unpackedShape.length>0?r.unpackedShape:r.shape).length;let d=`_${t}`;s[d]=new c.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!1),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"]),d+="_T",s[d]=new c.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!0),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"])}),s}getValueFromSingle(s,t,e,r,i){let d=`_${s}`;return i&&(d+="_T"),`
- float ${d}(int m[${t}]) {
- int offset = indicesToOffset${d}(m);
- vec2 coords = offsetToCoords(offset, ${e}, ${r});
- float value = getColorAsFloat(${(0,f.getGlsl)(this.context.glContext.version).texture2D}(${s}, coords));
- return value;
- }
- `}getPackedValueFrom(s,t,e,r,i){let d=`_${s}_Pack`;return i&&(d+="_T"),`
- vec4 ${d}(int m[${t}]) {
- int offset = indicesToOffset_${s}(m);
- vec2 coords = offsetToCoords(offset, ${e}, ${r});
- return ${(0,f.getGlsl)(this.context.glContext.version).texture2D}(${s}, coords);
- }
- `}}n.CoordsGlslLib=p},8520:(y,n)=>{var o;Object.defineProperty(n,"__esModule",{value:!0}),n.TopologicalSortGlslRoutines=n.GlslLibRoutineNode=n.GlslLibRoutine=n.GlslLib=n.GlslContext=n.FunctionType=void 0,(o=n.FunctionType||(n.FunctionType={}))[o.ValueBased=0]="ValueBased",o[o.Positional=1]="Positional",n.GlslContext=class{constructor(l,c,f,a){this.glContext=l,this.programInfo=c,this.inputTextureLayouts=f,this.outputTextureLayout=a}},n.GlslLib=class{constructor(l){this.context=l}},n.GlslLibRoutine=class{constructor(l,c){this.routineBody=l,this.dependencies=c}},n.GlslLibRoutineNode=class{constructor(l,c,f){this.name=l,this.dependencies=f||[],c&&(this.routineBody=c)}addDependency(l){l&&this.dependencies.push(l)}},n.TopologicalSortGlslRoutines=class{static returnOrderedNodes(l){if(!l||l.length===0)return[];if(l.length===1)return l;const c=new Set,f=new Set,a=new Array;return this.createOrderedNodes(l,c,f,a),a}static createOrderedNodes(l,c,f,a){for(let h=0;h0)for(let p=0;p{Object.defineProperty(n,"__esModule",{value:!0}),n.EncodingGlslLib=void 0;const l=o(8520);class c extends l.GlslLib{constructor(a){super(a)}getFunctions(){return Object.assign(Object.assign({},this.encodeFloat32()),this.decodeFloat32())}getCustomTypes(){return{}}encodeFloat32(){return{encode:new l.GlslLibRoutine(`highp vec4 encode(highp float f) {
- return vec4(f, 0.0, 0.0, 0.0);
- }
- `)}}decodeFloat32(){return{decode:new l.GlslLibRoutine(`highp float decode(highp vec4 rgba) {
- return rgba.r;
- }
- `)}}encodeUint8(){const a=c.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{encode:new l.GlslLibRoutine(`
- highp vec4 encode(highp float f) {
- highp float F = abs(f);
- highp float Sign = step(0.0,-f);
- highp float Exponent = floor(log2(F));
- highp float Mantissa = (exp2(- Exponent) * F);
- Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa));
- highp vec4 rgba;
- rgba[0] = 128.0 * Sign + floor(Exponent*exp2(-1.0));
- rgba[1] = 128.0 * mod(Exponent,2.0) + mod(floor(Mantissa*128.0),128.0);
- rgba[2] = floor(mod(floor(Mantissa*exp2(23.0 -8.0)),exp2(8.0)));
- rgba[3] = floor(exp2(23.0)*mod(Mantissa,exp2(-15.0)));
- ${a}
- rgba = rgba / 255.0; // values need to be normalized to [0,1]
- return rgba;
- }
- `)}}decodeUint8(){const a=c.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{decode:new l.GlslLibRoutine(`
- highp float decode(highp vec4 rgba) {
- rgba = rgba * 255.0; // values need to be de-normalized from [0,1] to [0,255]
- ${a}
- highp float Sign = 1.0 - step(128.0,rgba[0])*2.0;
- highp float Exponent = 2.0 * mod(rgba[0],128.0) + step(128.0,rgba[1]) - 127.0;
- highp float Mantissa = mod(rgba[1],128.0)*65536.0 + rgba[2]*256.0 +rgba[3] + float(0x800000);
- highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0 ));
- return Result;
- }
- `)}}static isLittleEndian(){const a=new ArrayBuffer(4),h=new Uint32Array(a),p=new Uint8Array(a);if(h[0]=3735928559,p[0]===239)return!0;if(p[0]===222)return!1;throw new Error("unknown endianness")}}n.EncodingGlslLib=c},9894:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.FragColorGlslLib=void 0;const l=o(8520),c=o(5060);class f extends l.GlslLib{constructor(h){super(h)}getFunctions(){return Object.assign(Object.assign({},this.setFragColor()),this.getColorAsFloat())}getCustomTypes(){return{}}setFragColor(){const h=(0,c.getGlsl)(this.context.glContext.version);return{setFragColor:new l.GlslLibRoutine(`
- void setFragColor(float value) {
- ${h.output} = encode(value);
- }
- `,["encoding.encode"])}}getColorAsFloat(){return{getColorAsFloat:new l.GlslLibRoutine(`
- float getColorAsFloat(vec4 color) {
- return decode(color);
- }
- `,["encoding.decode"])}}}n.FragColorGlslLib=f},2848:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.replaceInlines=void 0;const o=/@inline[\s\n\r]+(\w+)[\s\n\r]+([0-9a-zA-Z_]+)\s*\(([^)]*)\)\s*{(([^}]|[\n\r])*)}/gm;n.replaceInlines=function(l){const c={};let f;for(;(f=o.exec(l))!==null;){const a=f[3].split(",").map(h=>{const p=h.trim().split(" ");return p&&p.length===2?{type:p[0],name:p[1]}:null}).filter(h=>h!==null);c[f[2]]={params:a,body:f[4]}}for(const a in c){const h="(\\w+)?\\s+([_0-9a-zA-Z]+)\\s+=\\s+__FUNC__\\((.*)\\)\\s*;".replace("__FUNC__",a),p=new RegExp(h,"gm");for(;(f=p.exec(l))!==null;){const u=f[1],s=f[2],t=f[3].split(","),e=u?`${u} ${s};`:"";let r=c[a].body,i="";c[a].params.forEach((g,m)=>{g&&(i+=`${g.type} ${g.name} = ${t[m]};
-`)}),r=`${i}
- ${r}`,r=r.replace("return",`${s} = `);const d=`
- ${e}
- {
- ${r}
- }
- `;l=l.replace(f[0],d)}}return l.replace(o,"")}},8879:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.GlslPreprocessor=void 0;const l=o(8520),c=o(2848),f=o(5483),a=o(5060);n.GlslPreprocessor=class{constructor(h,p,u,s){this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new l.GlslContext(h,p,u,s),Object.keys(f.glslRegistry).forEach(e=>{const r=new f.glslRegistry[e](this.context);this.libs[e]=r});const t=this.glslLibRoutineDependencyGraph;for(const e in this.libs){const r=this.libs[e].getFunctions();for(const i in r){const d=e+"."+i;let g;t[d]?(g=t[d],g.routineBody=r[i].routineBody):(g=new l.GlslLibRoutineNode(d,r[i].routineBody),t[d]=g);const m=r[i].dependencies;if(m)for(let b=0;b{const s=u.split(".")[1];h.indexOf(s)!==-1&&p.push(this.glslLibRoutineDependencyGraph[u])}),l.TopologicalSortGlslRoutines.returnOrderedNodes(p)}getUniforms(h,p){const u=[];if(h)for(const s of h)u.push(`uniform sampler2D ${s};`);if(p)for(const s of p)u.push(`uniform ${s.type} ${s.name}${s.arrayLength?`[${s.arrayLength}]`:""};`);return u.join(`
-`)}}},5483:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.glslRegistry=void 0;const l=o(5107),c=o(7341),f=o(9894),a=o(2655),h=o(3891);n.glslRegistry={encoding:c.EncodingGlslLib,fragcolor:f.FragColorGlslLib,vec:h.VecGlslLib,shapeUtils:a.ShapeUtilsGlslLib,coordinates:l.CoordsGlslLib}},2655:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ShapeUtilsGlslLib=void 0;const l=o(8520);class c extends l.GlslLib{constructor(a){super(a)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.bcastIndex()),this.bcastMatmulIndex()),this.offsetToIndices()),this.indicesToOffset()),this.incrementIndices())}getCustomTypes(){return{}}bcastIndex(){const a=this.context.outputTextureLayout.shape.length,h={};return this.context.programInfo.inputNames.forEach((p,u)=>{const s=this.context.inputTextureLayouts[u].unpackedShape;if(s.length<=a){const t=s.length,e=a-t,r=`bcastIndices_${p}`;let i="";for(let g=0;g{const s=this.context.inputTextureLayouts[u].shape;if(!(s.length<2||s.length>a)){const t=s.length,e=a-t,r=`bcastMatmulIndices_${p}`;let i="";for(let g=0;g{const u=this.context.inputTextureLayouts[p].shape,s=this.context.inputTextureLayouts[p].strides,t=u.length;let e=`indicesToOffset_${h}`;a[e]=new l.GlslLibRoutine(c.indexToOffsetSingle(e,t,s)),e=`indicesToOffset_${h}_T`,a[e]=new l.GlslLibRoutine(c.indexToOffsetSingle(e,t,s.slice().reverse()))}),a}static indexToOffsetSingle(a,h,p){let u="";for(let s=h-1;s>=0;--s)u+=`
- offset += indices[${s}] * ${p[s]};
- `;return`
- int ${a}(int indices[${h}]) {
- int offset = 0;
- ${u}
- return offset;
- }
- `}offsetToIndices(){const a={};return this.context.programInfo.inputNames.forEach((h,p)=>{const u=this.context.inputTextureLayouts[p].shape,s=this.context.inputTextureLayouts[p].strides,t=u.length;let e=`offsetToIndices_${h}`;a[e]=new l.GlslLibRoutine(c.offsetToIndicesSingle(e,t,s)),e=`offsetToIndices_${h}_T`,a[e]=new l.GlslLibRoutine(c.offsetToIndicesSingle(e,t,s.slice().reverse()))}),a}static offsetToIndicesSingle(a,h,p){const u=[];for(let s=0;s{const u=this.context.inputTextureLayouts[p].shape,s=u.length,t=`incrementIndices_${h}`;let e="";for(let i=0;i= 0; --i) {
- if(i > axis) continue;
- indices[i] += 1;
- if(indices[i] < shape[i]) {
- break;
- }
- indices[i] = 0;
- }
- }
- `;a[t]=new l.GlslLibRoutine(r)}),a}}n.ShapeUtilsGlslLib=c},5060:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getDefaultFragShaderMain=n.getFragShaderPreamble=n.getVertexShaderSource=n.getGlsl=void 0;const o={version:"",attribute:"attribute",varyingVertex:"varying",varyingFrag:"varying",texture2D:"texture2D",output:"gl_FragColor",outputDeclaration:""},l={version:"#version 300 es",attribute:"in",varyingVertex:"out",varyingFrag:"in",texture2D:"texture",output:"outputColor",outputDeclaration:"out vec4 outputColor;"};function c(f){return f===1?o:l}n.getGlsl=c,n.getVertexShaderSource=function(f){const a=c(f);return`${a.version}
- precision highp float;
- ${a.attribute} vec3 position;
- ${a.attribute} vec2 textureCoord;
-
- ${a.varyingVertex} vec2 TexCoords;
-
- void main()
- {
- gl_Position = vec4(position, 1.0);
- TexCoords = textureCoord;
- }`},n.getFragShaderPreamble=function(f){const a=c(f);return`${a.version}
- precision highp float;
- precision highp int;
- precision highp sampler2D;
- ${a.varyingFrag} vec2 TexCoords;
- ${a.outputDeclaration}
- const vec2 halfCR = vec2(0.5, 0.5);
-
- // Custom vector types to handle higher dimenalities.
- struct ivec5
- {
- int x;
- int y;
- int z;
- int w;
- int u;
- };
-
- struct ivec6
- {
- int x;
- int y;
- int z;
- int w;
- int u;
- int v;
- };
-
- int imod(int x, int y) {
- return x - y * (x / y);
- }
-
- `},n.getDefaultFragShaderMain=function(f,a){return`
- void main() {
- int indices[${a}];
- toVec(TexCoords, indices);
- vec4 result = vec4(process(indices));
- ${w}
- ${v>0?"if(outputCoords.y < rows && outputCoords.z < cols){":""}
- int flattenedIndex = getFlattenedIndex(outputCoords);
-
- ivec3 inputRC = inputCoordsFromReshapedOutCoords(flattenedIndex);
- vec2 innerDims = vec2(float(inputRC.y),float(inputRC.z));
-
- result[${v}] = getChannel(getA(inputRC.x, inputRC.y, inputRC.z), innerDims);
-
- ${v>0?"}":""}
- `}const b=(0,c.getGlsl)(t.session.backend.glContext.version),_=`
- ${function(v){const w=l.ShapeUtil.computeStrides(v),S=["b","r","c"],A="index";return`
- ivec3 inputCoordsFromReshapedOutCoords(int index) {
- ${w.map((O,x)=>`int ${S[x]} = ${A} / ${O}; ${x===w.length-1?`int ${S[x+1]} = ${A} - ${S[x]} * ${O}`:`index -= ${S[x]} * ${O}`};`).join("")}
- return ivec3(b, r, c);
- }
- `}(d)}
- ${function(v){const w=l.ShapeUtil.computeStrides(v);return`
- int getFlattenedIndex(ivec3 coords) {
- // reverse y, z order
- return coords.x * ${w[0]} + coords.z * ${w[1]} + coords.y;
- }
-`}(g)}
- ${(0,a.unpackFromChannel)()}
-
- void main() {
- ivec3 rc = getOutputCoords();
-
- vec4 result = vec4(0.0);
-
- ivec3 outputCoords;
- int rows = ${g[2]};
- int cols = ${g[1]};
-
- ${m}
- ${b.output} = result;
- }
- `;return Object.assign(Object.assign({},r),{output:{dims:g,type:e.type,textureType:f.TextureType.packed},shaderSource:_,hasMain:!0})})(h,p,s,u)})},n.processDims3D=function(h){if(h.length===0)return[1,1,1];let p=1;for(let u=0;u1?h[h.length-2]:1,h[h.length-1]]},n.isReshapeCheap=function(h,p){let u=!1;return u=h.length===0||p.length===0||(h.length<2||p.length<2?h[h.length-1]===p[p.length-1]:h[h.length-1]===p[p.length-1]&&h[h.length-2]===p[p.length-2]),u}},718:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reshape=void 0;const l=o(2517);n.reshape=(c,f)=>{const a=l.ShapeUtil.calculateReshapedDims(f[0].dims,f[1].integerData);return c.session.pack?[c.reshapePacked(f[0],a)]:[c.reshapeUnpacked(f[0],a)]}},2268:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseResizeAttributesV11=n.parseResizeAttributesV10=n.resize=void 0;const l=o(5060),c=o(2039),f=o(9390),a=o(2827),h=o(9793),p={name:"Resize",inputNames:["A"],inputTypes:[c.TextureType.packed]};n.resize=(r,i,d)=>((0,h.validateInputs)(i,d),[r.run(Object.assign(Object.assign({},p),{cacheHint:d.cacheKey,get:()=>u(r,i,d)}),i)]),n.parseResizeAttributesV10=r=>(0,h.parseUpsampleAttributes)(r,10),n.parseResizeAttributesV11=r=>(0,h.parseUpsampleAttributes)(r,11);const u=(r,i,d)=>{const g=(0,l.getGlsl)(r.session.backend.glContext.version),[m,b]=s(i,d);if(m.every(N=>N===1)&&d.coordinateTransformMode!=="tf_crop_and_resize")return Object.assign(Object.assign({},p),{output:{dims:b,type:i[0].type,textureType:c.TextureType.packed},hasMain:!0,shaderSource:`void main() {
- vec4 v = ${g.texture2D}(X, TexCoords);
- ${g.output} = v;
- }`});const _=b.length;if(_<2)throw new Error(`output dimension should be at least 2, but got ${_}`);const v=b[_-2],w=b[_-1],S=i[0].dims;if(_!==S.length)throw new Error(`output dimension should match input ${S.length}, but got ${_}`);const A=S[_-2],O=S[_-1],x=m[_-2],I=m[_-1];let $="";if(d.mode!=="linear")throw new Error(`resize (packed) does not support mode: '${d.mode}'`);switch(d.coordinateTransformMode){case"asymmetric":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- return vec4(coords) / scaleWHWH;
- }
- `;break;case"half_pixel":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- return (vec4(coords) + 0.5) / scaleWHWH - 0.5;
- }
- `;break;case"pytorch_half_pixel":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- vec4 fcoords = vec4(coords);
- return vec4(
- ${w}.0 > 1.0 ? (fcoords.x + 0.5) / scaleWHWH.x - 0.5 : 0.0,
- ${v}.0 > 1.0 ? (fcoords.y + 0.5) / scaleWHWH.y - 0.5 : 0.0,
- ${w}.0 > 1.0 ? (fcoords.z + 0.5) / scaleWHWH.z - 0.5 : 0.0,
- ${v}.0 > 1.0 ? (fcoords.w + 0.5) / scaleWHWH.w - 0.5 : 0.0
- );
- }
- `;break;case"align_corners":$=`
- vec4 getSourceFracIndex(ivec4 coords) {
- vec4 resized = vec4(${w}.0 - 1.0, ${v}.0 - 1.0, ${w}.0 - 1.0,
- ${v}.0 - 1.0);
- vec4 original = vec4(${O}.0 - 1.0, ${A}.0 - 1.0, ${O}.0 - 1.0,
- ${A}.0 - 1.0);
- vec4 new_scale = original / resized;
- return vec4(coords) * new_scale;
- }
- `;break;default:throw new Error(`resize (packed) does not support coordinateTransformMode: '${d.coordinateTransformMode}'`)}const B=(0,f.getCoordsDataType)(_),L=`
- const vec2 inputWH = vec2(${A}.0, ${O}.0);
- const vec4 scaleWHWH = vec4(float(${x}), float(${I}), float(${x}), float(${I}));
- ${(0,a.unpackFromChannel)()}
- ${$}
- float getAValue(int x10, int r, int c, int d) {
- return getChannel(getA(x10, r, c, d), vec2(c, d));
- }
- void main() {
- ${B} rc = getOutputCoords();
-
- int batch = rc[0];
- int depth = rc[1];
-
- // retrieve the 4 coordinates that is used in the 4 packed output values.
- ivec4 coords = ivec4(rc.wz, rc.w + 1, rc.z + 1);
-
- // calculate the source index in fraction
- vec4 sourceFrac = getSourceFracIndex(coords);
-
- // get the lower and upper bound of the 4 values that will be packed into one texel.
- ivec4 x00 = ivec4(max(sourceFrac.xy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xy)));
- ivec4 x01 = ivec4(max(sourceFrac.xw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xw)));
- ivec4 x10 = ivec4(max(sourceFrac.zy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zy)));
- ivec4 x11 = ivec4(max(sourceFrac.zw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zw)));
-
- bool hasNextRow = rc.w < ${v-1};
- bool hasNextCol = rc.z < ${w-1};
-
- // pack x00, x01, x10, x11's top-left corner into one vec4 structure
- vec4 topLeft = vec4(
- getAValue(batch, depth, x00.x, x00.y),
- hasNextCol ? getAValue(batch, depth, x01.x, x01.y) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.x, x10.y) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.y) : 0.0);
-
- // pack x00, x01, x10, x11's top-right corner into one vec4 structure
- vec4 topRight = vec4(
- getAValue(batch, depth, x00.x, x00.w),
- hasNextCol ? getAValue(batch, depth, x01.x, x01.w) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.x, x10.w) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.w) : 0.0);
-
- // pack x00, x01, x10, x11's bottom-left corner into one vec4 structure
- vec4 bottomLeft = vec4(
- getAValue(batch, depth, x00.z, x00.y),
- hasNextCol ? getAValue(batch, depth, x01.z, x01.y) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.z, x10.y) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.y) : 0.0);
-
- // pack x00, x01, x10, x11's bottom-right corner into one vec4 structure
- vec4 bottomRight = vec4(
- getAValue(batch, depth, x00.z, x00.w),
- hasNextCol ? getAValue(batch, depth, x01.z, x01.w) : 0.0,
- hasNextRow ? getAValue(batch, depth, x10.z, x10.w) : 0.0,
- (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.w) : 0.0);
-
- // calculate the interpolation fraction on u and v direction
- vec4 frac = vec4(sourceFrac) - floor(sourceFrac);
- vec4 clampFrac = clamp(frac, vec4(0.0), vec4(1.0));
-
- vec4 top = mix(topLeft, topRight, clampFrac.ywyw);
- vec4 bottom = mix(bottomLeft, bottomRight, clampFrac.ywyw);
- vec4 newValue = mix(top, bottom, clampFrac.xxzz);
-
- ${g.output} = vec4(newValue);
- }
- `;return Object.assign(Object.assign({},p),{output:{dims:b,type:i[0].type,textureType:c.TextureType.packed},hasMain:!0,shaderSource:L})},s=(r,i)=>{const d=r[0].dims;let g,m=i.scales;if(m.length===0){const _=r[i.scalesInputIdx];if(_&&_.size!==0){if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");m=t(_,i.mode,i.isResize)}else{const v=r[i.sizesInputIdx];if(!v||v.size===0)throw new Error("Either scales or sizes MUST be provided as input.");g=Array.from(v.integerData),m=e(g,d,i.mode,i.isResize)}}else if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");const b=g||d.map((_,v)=>Math.floor(_*m[v]));return[m,b]},t=(r,i,d)=>{const g=Array.from(r.floatData);return(0,h.scalesValidation)(g,i,d),g},e=(r,i,d,g)=>{const m=i.length,b=new Array(m);for(let _=0,v=m;_{Object.defineProperty(n,"__esModule",{value:!0}),n.shape=void 0;const l=o(9162);n.shape=(f,a)=>(c(a),[new l.Tensor([a[0].dims.length],"int32",void 0,void 0,new Int32Array(a[0].dims))]);const c=f=>{if(!f||f.length!==1)throw new Error("Shape requires 1 input.")}},2278:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sliceV10=n.parseSliceAttributes=n.slice=void 0;const l=o(246),c=o(782),f=o(2517),a=o(2039),h={name:"Slice",inputNames:["A"],inputTypes:[a.TextureType.unpacked]};n.slice=(e,r,i)=>(u(r),[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>p(e,r[0],i)}),r)]),n.parseSliceAttributes=e=>{const r=e.attributes.getInts("starts"),i=e.attributes.getInts("ends"),d=e.attributes.getInts("axes",[]);return(0,l.createAttributeWithCacheKey)({starts:r,ends:i,axes:d})};const p=(e,r,i)=>{const d=i.axes.length===0?r.dims.slice(0).map((S,A)=>A):i.axes,g=f.ShapeUtil.normalizeAxes(d,r.dims.length),m=i.starts.map((S,A)=>S>r.dims[g[A]]-1?r.dims[g[A]]:f.ShapeUtil.normalizeAxis(S,r.dims[g[A]])),b=i.ends.map((S,A)=>S>r.dims[g[A]]-1?r.dims[g[A]]:f.ShapeUtil.normalizeAxis(S,r.dims[g[A]])),_=r.dims.slice(),v=[];for(let S=0;S0&&v.push(`outputIdx[${g[S]}] += ${m[S]};`);const w=`
- float process(int outputIdx[${_.length}]) {
- ${v.join(`
- `)}
- return _A(outputIdx);
- }`;return Object.assign(Object.assign({},h),{output:{dims:_,type:r.type,textureType:a.TextureType.unpacked},shaderSource:w})},u=e=>{if(!e||e.length!==1)throw new Error("Slice requires 1 input.");if(c.NUMBER_TYPES.indexOf(e[0].type)===-1)throw new Error("Invalid input type.")};n.sliceV10=(e,r)=>{t(r);const i=s(e,r);return[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>p(e,r[0],i)}),[r[0]])]};const s=(e,r)=>{if(!e.session.isInitializer(r[1].dataId)||!e.session.isInitializer(r[2].dataId)||r.length>=4&&!e.session.isInitializer(r[3].dataId)||r.length>=5&&!e.session.isInitializer(r[4].dataId))throw new Error("dynamic slice attributes are not allowed");if(r.length>=5&&r[4].integerData.some(m=>m!==1))throw new Error("currently non-1 steps is not supported for Slice");const i=Array.from(r[1].integerData),d=Array.from(r[2].integerData),g=r.length>=4?Array.from(r[3].integerData):[];return{starts:i,ends:d,axes:g,cacheKey:`${g};${i};${d}`}},t=e=>{if(!e||e.length<3||e.length>5)throw new Error("Invalid input number.");if(e[1].type!=="int32"||e[1].dims.length!==1)throw new Error("Invalid input type.");if(e[2].type!=="int32"||e[2].dims.length!==1)throw new Error("Invalid input type.");if(e.length>=4&&(e[3].type!=="int32"||e[3].dims.length!==1))throw new Error("Invalid input type.");if(e.length>=5&&(e[4].type!=="int32"||e[4].dims.length!==1))throw new Error("Invalid input type.")}},5524:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.softmaxV13=n.parseSoftmaxAttributesV13=n.parseSoftmaxAttributes=n.softmax=void 0;const l=o(246),c=o(2517),f=o(5060),a=o(2039),h=o(3738),p={name:"SoftmaxComputeMax",inputNames:["A"],inputTypes:[a.TextureType.unpacked]},u={name:"SoftmaxComputeScale",inputNames:["A","Max"],inputTypes:[a.TextureType.unpacked,a.TextureType.unpacked]},s={name:"SoftMax",inputNames:["A","Max","Norm"],inputTypes:[a.TextureType.unpacked,a.TextureType.unpacked,a.TextureType.unpacked]};n.softmax=(g,m,b)=>{d(m);const _=m[0].dims.slice(),v=c.ShapeUtil.normalizeAxis(b.axis,_.length),w=c.ShapeUtil.sizeToDimension(_,v),S=c.ShapeUtil.sizeFromDimension(_,v);return t(g,m,b,w,S)},n.parseSoftmaxAttributes=g=>(0,l.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",1)}),n.parseSoftmaxAttributesV13=g=>(0,l.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",-1)}),n.softmaxV13=(g,m,b)=>{d(m);const _=m[0].dims.slice(),v=c.ShapeUtil.normalizeAxis(b.axis,_.length),w=_.length,S=v!==w-1,A=[];let O,x=[],I=[];S&&(x=Array.from({length:w}).map((N,H)=>H),x[v]=w-1,x[w-1]=v,x.map(N=>A.push(_[N])),O=(0,l.createAttributeWithCacheKey)({perm:x}),I=(0,h.transpose)(g,m,O));const $=S?c.ShapeUtil.sizeToDimension(A,w-1):c.ShapeUtil.sizeToDimension(_,w-1),B=S?c.ShapeUtil.sizeFromDimension(A,w-1):c.ShapeUtil.sizeFromDimension(_,w-1),L=t(g,S?I:m,b,$,B);return S?(0,h.transpose)(g,L,O):L};const t=(g,m,b,_,v)=>{const w=e(g,m[0],_,v,[_]),S=g.run(Object.assign(Object.assign({},p),{cacheHint:b.cacheKey,get:()=>w}),m),A=r(g,m[0],_,v,w.output.dims,[_]),O=g.run(Object.assign(Object.assign({},u),{cacheHint:b.cacheKey,get:()=>A}),[m[0],S]),x=i(g,m[0],_,v,w.output.dims,A.output.dims);return[g.run(Object.assign(Object.assign({},s),{cacheHint:b.cacheKey,get:()=>x}),[m[0],S,O])]},e=(g,m,b,_,v)=>{const[w,S]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),A=v.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(v.length!==1)throw new Error("Dimensionality of the output should be 1");if(v[0]!==b)throw new Error("Shape of the 
output should be equal to logical row count");const O=(0,f.getGlsl)(g.session.backend.glContext.version),x=`
- float process(int[${A}] indices) {
- int logical_row_start_offset = indices[0] * ${_};
-
- float max = getColorAsFloat(${O.texture2D}(A, offsetToCoords(logical_row_start_offset, ${w},
- ${S} )));
- for(int i=1; i<${_}; ++i)
- {
- float current = getColorAsFloat(${O.texture2D}(A, offsetToCoords(logical_row_start_offset + i,
- ${w}, ${S})));
- if(current > max)
- max = current;
- }
-
- return max;
- }`;return Object.assign(Object.assign({},p),{output:{dims:v,type:m.type,textureType:a.TextureType.unpacked},shaderSource:x})},r=(g,m,b,_,v,w)=>{const[S,A]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),O=w.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1)throw new Error("Dimensionality of the output should be 1");if(w[0]!==b)throw new Error("Shape of the output should be equal to logical row count");if(v.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(v[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const x=`
- float process(int[${O}] indices) {
- int logical_row_start_offset = indices[0] * ${_};
-
- float norm_factor = 0.0;
- float max = _Max(indices);
- for(int i=0; i<${_}; ++i)
- {
- norm_factor += exp(getColorAsFloat(${(0,f.getGlsl)(g.session.backend.glContext.version).texture2D}(A, offsetToCoords(logical_row_start_offset + i,
- ${S}, ${A}))) - max);
- }
-
- return norm_factor;
- }`;return Object.assign(Object.assign({},u),{output:{dims:w,type:m.type,textureType:a.TextureType.unpacked},shaderSource:x})},i=(g,m,b,_,v,w)=>{const[S,A]=g.calculateTextureWidthAndHeight(m.dims,a.TextureType.unpacked),O=m.dims.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(v.length!==1||w.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(v[0]!==b||w[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const x=`
- float process(int[${O}] indices) {
-
- // get offset of current logical tensor index from the 2-D texture coordinates (TexCoords)
- int offset = coordsToOffset(TexCoords, ${S}, ${A});
-
- //determine the logical row for this index
- int logical_row_index[1];
- logical_row_index[0] = offset / ${_};
-
- float norm_factor = _Norm(logical_row_index);
-
- // avoid possible division by 0
- // if norm_facor is 0, all elements are zero
- // if so, return 0
- if(norm_factor == 0.0)
- return 0.0;
-
- return exp(_A(indices) - _Max(logical_row_index)) / norm_factor;
- }`;return Object.assign(Object.assign({},s),{output:{dims:m.dims,type:m.type,textureType:a.TextureType.unpacked},shaderSource:x})},d=g=>{if(!g||g.length!==1)throw new Error("Softmax requires 1 input.");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type")}},5975:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSplitAttributes=n.split=void 0;const l=o(246),c=o(2517),f=o(2039),a={name:"Split",inputNames:["A"],inputTypes:[f.TextureType.unpacked]};n.split=(s,t,e)=>{u(t);const r=c.ShapeUtil.normalizeAxis(e.axis,t[0].dims.length),i=h(s,t,r,e),d=[];for(let g=0;gp(s,t[0],e,r,g)}),t));return d},n.parseSplitAttributes=s=>{const t=s.attributes.getInt("axis",0),e=s.attributes.getInts("split",[]),r=s.outputs.length;return(0,l.createAttributeWithCacheKey)({axis:t,split:e,numOutputs:r})};const h=(s,t,e,r)=>{const[,i]=c.SplitUtil.splitShape(t[0].dims,e,r.split,r.numOutputs);return i.length},p=(s,t,e,r,i)=>{const[d,g]=c.SplitUtil.splitShape(t.dims,r,e.split,e.numOutputs),m=g[i],b=d[i],_=`
- float process(int indices[${b.length}]) {
- indices[${r}] += ${m};
- return _A(indices);
- }
- `;return Object.assign(Object.assign({},a),{cacheHint:`${e.cacheKey}:${i}`,output:{dims:b,type:t.type,textureType:f.TextureType.unpacked},shaderSource:_})},u=s=>{if(!s||s.length!==1)throw new Error("Split requires one input.");if(s[0].type!=="int8"&&s[0].type!=="uint8"&&s[0].type!=="int16"&&s[0].type!=="uint16"&&s[0].type!=="int32"&&s[0].type!=="uint32"&&s[0].type!=="float32"&&s[0].type!=="float64"&&s[0].type!=="bool")throw new Error("Invalid input type.")}},3933:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSqueezeAttributes=n.squeezeV13=n.squeeze=void 0;const l=o(2517);n.squeeze=(a,h,p)=>{c(h);const u=l.ShapeUtil.squeezeShape(h[0].dims,p);return[a.reshapeUnpacked(h[0],u)]},n.squeezeV13=(a,h)=>(f(h),(0,n.squeeze)(a,[h[0]],Array.from(h[1].integerData))),n.parseSqueezeAttributes=a=>a.attributes.getInts("axes");const c=a=>{if(!a||a.length!==1)throw new Error("Squeeze requires 1 input.");if(a[0].type==="string")throw new Error("invalid input tensor types.")},f=a=>{if(!a||a.length!==2)throw new Error("Squeeze requires 2 inputs.");if(a[1].type!=="int32")throw new Error("Invalid input type.")}},6558:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sum=void 0;const l=o(5060),c=o(2039);n.sum=(h,p)=>{a(p);const u={name:"Sum",inputNames:p.map((s,t)=>`X${t}`),inputTypes:new Array(p.length).fill(c.TextureType.unpacked)};return[h.run(Object.assign(Object.assign({},u),{get:()=>f(h,p,u)}),p)]};const f=(h,p,u)=>{const s=(0,l.getGlsl)(h.session.backend.glContext.version),t=p[0].dims.slice(),e=`
- void main() {
- vec4 result = ${p.map((r,i)=>`${s.texture2D}(X${i},TexCoords)`).join(" + ")};
- ${s.output} = result;
- }
- `;return Object.assign(Object.assign({},u),{output:{dims:t,type:p[0].type,textureType:c.TextureType.unpacked},hasMain:!0,shaderSource:e})},a=h=>{if(!h||h.length===0)throw new Error("Sum requires inputs.");const p=h[0].dims.length;for(let u=1;u{Object.defineProperty(n,"__esModule",{value:!0}),n.tile=void 0;const l=o(782),c=o(2039);n.tile=(h,p)=>{a(p);const u={name:"Tile",inputNames:["A"],inputTypes:[c.TextureType.unpacked]};return[h.run(Object.assign(Object.assign({},u),{get:()=>f(h,p,u)}),p)]};const f=(h,p,u)=>{const s=p[0].dims.slice(),t=new Array(s.length),e=[];for(let d=0;d{if(!h||h.length!==2)throw new Error("Tile requires 2 input.");if(h[1].dims.length!==1)throw new Error("The second input shape must 1 dimension.");if(h[1].dims[0]!==h[0].dims.length)throw new Error("Invalid input shape.");if(l.NUMBER_TYPES.indexOf(h[0].type)===-1)throw new Error("Invalid input type.");if(h[1].type!=="int32"&&h[1].type!=="int16")throw new Error("Invalid repeat type.")}},3738:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseTransposeAttributes=n.transpose=void 0;const l=o(246),c=o(2517),f=o(2039),a={name:"Transpose",inputNames:["A"],inputTypes:[f.TextureType.unpacked]};n.transpose=(e,r,i)=>(t(r),[e.run(Object.assign(Object.assign({},a),{cacheHint:i.cacheKey,get:()=>h(e,r[0],i.perm)}),r)]),n.parseTransposeAttributes=e=>(0,l.createAttributeWithCacheKey)({perm:e.attributes.getInts("perm",[])});const h=(e,r,i)=>{const d=r.dims;i=p(d,i);const g=u(d,i),m=d.length,b=`
- ${s("perm",i,m)}
- float process(int indices[${m}]) {
- int a[${m}];
- perm(a, indices);
- return _A(a);
- }`;return Object.assign(Object.assign({},a),{output:{dims:g,type:r.type,textureType:f.TextureType.unpacked},shaderSource:b})},p=(e,r)=>(r&&r.length!==e.length&&(r=[...e.keys()].reverse()),r),u=(e,r)=>(r=p(e,r),c.ShapeUtil.sortBasedOnPerm(e,r)),s=(e,r,i)=>{const d=[];d.push(`void ${e}(out int a[${i}], int src[${i}]) {`);for(let g=0;g{if(!e||e.length!==1)throw new Error("Transpose requires 1 input.");if(e[0].type!=="float32"&&e[0].type!=="float64")throw new Error("input should be float tensor")}},8710:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.encodeAsUint8=void 0;const l=o(5060),c=o(2039);n.encodeAsUint8=(f,a)=>{const h=a.shape,p=(0,l.getGlsl)(f.session.backend.glContext.version),u=`
- const float FLOAT_MAX = 1.70141184e38;
- const float FLOAT_MIN = 1.17549435e-38;
-
- bool isNaN(float val) {
- return (val < 1.0 || 0.0 < val || val == 0.0) ? false : true;
- }
-
- highp vec4 encodeAsUint8(highp float v) {
- if (isNaN(v)) {
- return vec4(255, 255, 255, 255);
- }
-
- highp float av = abs(v);
-
- if(av < FLOAT_MIN) {
- return vec4(0.0, 0.0, 0.0, 0.0);
- } else if(v > FLOAT_MAX) {
- return vec4(0.0, 0.0, 128.0, 127.0) / 255.0;
- } else if(v < -FLOAT_MAX) {
- return vec4(0.0, 0.0, 128.0, 255.0) / 255.0;
- }
-
- highp vec4 c = vec4(0,0,0,0);
-
- highp float e = floor(log2(av));
- highp float m = exp2(fract(log2(av))) - 1.0;
-
- c[2] = floor(128.0 * m);
- m -= c[2] / 128.0;
- c[1] = floor(32768.0 * m);
- m -= c[1] / 32768.0;
- c[0] = floor(8388608.0 * m);
-
- highp float ebias = e + 127.0;
- c[3] = floor(ebias / 2.0);
- ebias -= c[3] * 2.0;
- c[2] += floor(ebias) * 128.0;
-
- c[3] += 128.0 * step(0.0, -v);
-
- return c / 255.0;
- }
-
- void main() {
- float value = ${p.texture2D}(X,TexCoords).r;
- ${p.output} = encodeAsUint8(value);
- }`,s={name:"Uint8Encode",inputTypes:[c.TextureType.unpacked],inputNames:["X"],output:{dims:h,type:a.tensor.type,textureType:c.TextureType.downloadUint8AsFloat},shaderSource:u,hasMain:!0};return f.executeProgram(s,[a.tensor])}},4909:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.tanh=n.tan=n.sqrt=n.sin=n.sigmoid=n.relu=n.not=n.neg=n.log=n.parseLeakyReluAttributes=n.leakyRelu=n.identity=n.floor=n.exp=n.parseEluAttributes=n.elu=n.cos=n.ceil=n.clipV11=n.parseClipAttributes=n.clip=n.atan=n.asin=n.acos=n.abs=n.glslTanh=n.glslTan=n.glslSqrt=n.glslSigmoid=n.glslRelu=n.glslSin=n.glslNot=n.glslNeg=n.glslLog=n.glslLeakyRelu=n.glslIdentity=n.glslClip=n.glslFloor=n.glslExp=n.glslElu=n.glslCos=n.glslCeil=n.glslAtan=n.glslAsin=n.glslAcos=n.glslAbs=void 0;const l=o(246),c=o(2517),f=o(8520),a=o(5060),h=o(2039);function p(){return L("abs")}function u(){return L("acos")}function s(){return L("asin")}function t(){return L("atan")}function e(){return L("ceil")}function r(){return L("cos")}function i(M){const j="elu";return{body:`
- const float alpha = float(${M});
-
- float ${j}_(float a) {
- return a >= 0.0 ? a: (exp(a) - 1.0) * alpha;
- }
- vec4 ${j}_(vec4 v) {
- return vec4(${j}_(v.x), ${j}_(v.y), ${j}_(v.z), ${j}_(v.w));
- }
- `,name:j,type:f.FunctionType.ValueBased}}function d(){return L("exp")}function g(){return L("floor")}function m(M,j){const Z="clip";return{body:`
- const float min = float(${M});
- const float max = float(${j});
-
- float ${Z}_(float a) {
- return clamp(a, min, max);
- }
- vec4 ${Z}_(vec4 v) {
- return clamp(v, min, max);
- }
- `,name:Z,type:f.FunctionType.ValueBased}}function b(){const M="indentity";return{body:`
- float ${M}_(float a) {
- return a;
- }
- vec4 ${M}_(vec4 v) {
- return v;
- }
- `,name:M,type:f.FunctionType.ValueBased}}function _(M){const j="leakyRelu";return{body:`
- const float alpha = float(${M});
-
- float ${j}_(float a) {
- return a < 0.0 ? a * alpha : a;
- }
- vec4 ${j}_(vec4 v) {
- return vec4(${j}_(v.x), ${j}_(v.y), ${j}_(v.z), ${j}_(v.w));
- }
- `,name:j,type:f.FunctionType.ValueBased}}function v(){return L("log")}function w(){const M="neg";return{body:`
- float ${M}_(float a) {
- return -a;
- }
- vec4 ${M}_(vec4 v) {
- return -v;
- }
- `,name:M,type:f.FunctionType.ValueBased}}function S(){const M="not";return{body:`
- float ${M}_(float a) {
- return float( ! bool(a) );
- }
- bool ${M}_(bool a) {
- return !a;
- }
- vec4 ${M}_(vec4 v) {
- return vec4(!bool(v.x), !bool(v.y), !bool(v.z), !bool(v.w));
- }
- bvec4 ${M}_(bvec4 v) {
- return bvec4(!v.x, !v.y, !v.z, !v.w);
- }
- `,name:M,type:f.FunctionType.ValueBased}}function A(){return L("sin")}function O(){const M="relu";return{body:`
- float ${M}_(float a) {
- return max( a, 0.0 );
- }
- vec4 ${M}_(vec4 v) {
- return max( v, 0.0 );
- }
- `,name:M,type:f.FunctionType.ValueBased}}function x(){const M="sigmoid";return{body:`
- float ${M}_(float a) {
- return 1.0 / (1.0 + exp(-a));
- }
- vec4 ${M}_(vec4 v) {
- return 1.0 / (1.0 + exp(-v));
- }
- `,name:M,type:f.FunctionType.ValueBased}}function I(){return L("sqrt")}function $(){return L("tan")}function B(){const M="tanh";return{body:`
- float ${M}_(float a) {
- a = clamp(a, -10., 10.);
- a = exp(2.*a);
- return (a - 1.) / (a + 1.);
- }
- vec4 ${M}_(vec4 v) {
- v = clamp(v, -10., 10.);
- v = exp(2.*v);
- return (v - 1.) / (v + 1.);
- }
- `,name:M,type:f.FunctionType.ValueBased}}function L(M){return{body:`
- float ${M}_(float a) {
- return ${M}(a);
- }
- vec4 ${M}_(vec4 v) {
- return ${M}(v);
- }
- `,name:M,type:f.FunctionType.ValueBased}}n.glslAbs=p,n.glslAcos=u,n.glslAsin=s,n.glslAtan=t,n.glslCeil=e,n.glslCos=r,n.glslElu=i,n.glslExp=d,n.glslFloor=g,n.glslClip=m,n.glslIdentity=b,n.glslLeakyRelu=_,n.glslLog=v,n.glslNeg=w,n.glslNot=S,n.glslSin=A,n.glslRelu=O,n.glslSigmoid=x,n.glslSqrt=I,n.glslTan=$,n.glslTanh=B;const N=(M,j,Z,X)=>{const Q=M.session.pack?h.TextureType.packed:h.TextureType.unpacked,ee={name:Z.name,inputTypes:[Q],inputNames:["A"],cacheHint:X};return Object.assign(Object.assign({},ee),{get:()=>((ue,Ae,xe,oe)=>{const we=ue.session.pack?h.TextureType.packed:h.TextureType.unpacked,ye=(0,a.getGlsl)(ue.session.backend.glContext.version);return Object.assign(Object.assign({},Ae),{output:{dims:xe.dims,type:xe.type,textureType:we},shaderSource:`
- ${oe.body}
- void main() {
- vec4 v = ${ye.texture2D}(A, TexCoords);
- v = ${oe.name}_(v);
- ${ye.output} = v;
- }
- `,hasMain:!0})})(M,ee,j,Z)})};n.abs=(M,j)=>[M.run(N(M,j[0],p()),j)],n.acos=(M,j)=>[M.run(N(M,j[0],u()),j)],n.asin=(M,j)=>[M.run(N(M,j[0],s()),j)],n.atan=(M,j)=>[M.run(N(M,j[0],t()),j)],n.clip=(M,j,Z)=>[M.run(N(M,j[0],m(Z.min,Z.max),Z.cacheKey),j)],n.parseClipAttributes=M=>(0,l.createAttributeWithCacheKey)({min:M.attributes.getFloat("min",c.MIN_CLIP),max:M.attributes.getFloat("max",c.MAX_CLIP)}),n.clipV11=(M,j)=>{const Z=H(M,j);return(0,n.clip)(M,[j[0]],Z)};const H=(M,j)=>{if(j.length>=3&&(!M.session.isInitializer(j[1].dataId)||!M.session.isInitializer(j[2].dataId)))throw new Error("dynamic clip attributes are not allowed");const Z=j.length>=3?j[1].numberData[0]:c.MIN_CLIP,X=j.length>=3?j[2].numberData[0]:c.MAX_CLIP;return(0,l.createAttributeWithCacheKey)({min:Z,max:X})};n.ceil=(M,j)=>[M.run(N(M,j[0],e()),j)],n.cos=(M,j)=>[M.run(N(M,j[0],r()),j)],n.elu=(M,j,Z)=>[M.run(N(M,j[0],i(Z.alpha),Z.cacheKey),j)],n.parseEluAttributes=M=>(0,l.createAttributeWithCacheKey)({alpha:M.attributes.getFloat("alpha",1)}),n.exp=(M,j)=>[M.run(N(M,j[0],d()),j)],n.floor=(M,j)=>[M.run(N(M,j[0],g()),j)],n.identity=(M,j)=>[M.run(N(M,j[0],b()),j)],n.leakyRelu=(M,j,Z)=>[M.run(N(M,j[0],_(Z.alpha),Z.cacheKey),j)],n.parseLeakyReluAttributes=M=>(0,l.createAttributeWithCacheKey)({alpha:M.attributes.getFloat("alpha",.01)}),n.log=(M,j)=>[M.run(N(M,j[0],v()),j)],n.neg=(M,j)=>[M.run(N(M,j[0],w()),j)],n.not=(M,j)=>[M.run(N(M,j[0],S()),j)],n.relu=(M,j)=>[M.run(N(M,j[0],O()),j)],n.sigmoid=(M,j)=>[M.run(N(M,j[0],x()),j)],n.sin=(M,j)=>[M.run(N(M,j[0],A()),j)],n.sqrt=(M,j)=>[M.run(N(M,j[0],I()),j)],n.tan=(M,j)=>[M.run(N(M,j[0],$()),j)],n.tanh=(M,j)=>[M.run(N(M,j[0],B()),j)]},5611:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackProgramInfoLoader=n.createUnpackProgramInfo=void 0;const l=o(5060),c=o(2039),f=o(9390),a=o(2827),h={name:"unpack",inputNames:["A"],inputTypes:[c.TextureType.packed]};n.createUnpackProgramInfo=(p,u)=>{const s=u.dims.length,t=(0,a.getChannels)("rc",s),e=t.slice(-2),r=(0,f.getCoordsDataType)(s),i=(0,a.unpackFromChannel)(),d=u.dims.length===0?"":function(b,_){if(b===1)return"rc";let v="";for(let w=0;wObject.assign(Object.assign({},h),{get:()=>(0,n.createUnpackProgramInfo)(p,u)})},8428:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseUnsqueezeAttributes=n.unsqueezeV13=n.unsqueeze=void 0;const l=o(2517);n.unsqueeze=(a,h,p)=>{c(h);const u=l.ShapeUtil.unsqueezeShape(h[0].dims,p);return[a.reshapeUnpacked(h[0],u)]},n.unsqueezeV13=(a,h)=>(f(h),(0,n.unsqueeze)(a,[h[0]],Array.from(h[1].integerData))),n.parseUnsqueezeAttributes=a=>a.attributes.getInts("axes");const c=a=>{if(!a||a.length!==1)throw new Error("Unsqueeze requires 1 input.");if(a[0].type==="string")throw new Error("invalid input tensor types.")},f=a=>{if(!a||a.length!==2)throw new Error("Unsqueeze requires 2 inputs.");if(a[1].type!=="int32")throw new Error("Invalid input type.")}},9793:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.scalesValidation=n.validateInputs=n.parseUpsampleAttributes=n.parseUpsampleAttributesV9=n.parseUpsampleAttributesV7=n.upsample=void 0;const l=o(246),c=o(5060),f=o(2039),a={name:"Upsample",inputNames:["X"],inputTypes:[f.TextureType.unpacked]};n.upsample=(p,u,s)=>((0,n.validateInputs)(u,s),[p.run(Object.assign(Object.assign({},a),{cacheHint:s.cacheKey,get:()=>h(p,u,s)}),u)]),n.parseUpsampleAttributesV7=p=>(0,n.parseUpsampleAttributes)(p,7),n.parseUpsampleAttributesV9=p=>(0,n.parseUpsampleAttributes)(p,9),n.parseUpsampleAttributes=(p,u)=>{const 
s=u>=10,t=p.attributes.getString("mode","nearest");if(t!=="nearest"&&t!=="linear"&&(u<11||t!=="cubic"))throw new Error(`unrecognized mode: ${t}`);let e=[];u<9&&(e=p.attributes.getFloats("scales"),(0,n.scalesValidation)(e,t,s));const r=p.attributes.getFloat("extrapolation_value",0),i=u>10?p.attributes.getString("coordinate_transformation_mode","half_pixel"):"asymmetric";if(["asymmetric","pytorch_half_pixel","tf_half_pixel_for_nn","align_corners","tf_crop_and_resize","half_pixel"].indexOf(i)===-1)throw new Error(`coordinate_transform_mode '${i}' is not supported`);const d=i==="tf_crop_and_resize",g=d,m=t==="nearest"&&u>=11?p.attributes.getString("nearest_mode","round_prefer_floor"):"";if(["round_prefer_floor","round_prefer_ceil","floor","ceil",""].indexOf(m)===-1)throw new Error(`nearest_mode '${m}' is not supported`);const b=p.attributes.getFloat("cubic_coeff_a",-.75),_=p.attributes.getInt("exclude_outside",0)!==0;if(_&&t!=="cubic")throw new Error("exclude_outside can be set to 1 only when mode is CUBIC.");const v=u<11||t==="nearest"&&i==="asymmetric"&&m==="floor";let w=0,S=0,A=0;return u>10?p.inputs.length>2?(w=1,S=2,A=3):(S=1,A=2):u===9&&(S=1),(0,l.createAttributeWithCacheKey)({opset:u,isResize:s,mode:t,scales:e,extrapolationValue:r,coordinateTransformMode:i,useExtrapolation:g,needRoiInput:d,nearestMode:m,cubicCoefficientA:b,excludeOutside:_,useNearest2xOptimization:v,roiInputIdx:w,scalesInputIdx:S,sizesInputIdx:A})};const h=(p,u,s)=>{const t=(0,c.getGlsl)(p.session.backend.glContext.version),[e,r]=p.calculateTextureWidthAndHeight(u[0].dims,f.TextureType.unpacked),i=u[0].dims.map((A,O)=>Math.floor(A*s.scales[O])),[d,g]=p.calculateTextureWidthAndHeight(i,f.TextureType.unpacked),m=i.length,b=new Array(m),_=new Array(m);let v=`
- int output_pitches[${m}];
- int input_pitches[${m}];
- `;for(let A=m-1;A>=0;A--)b[A]=A===m-1?1:b[A+1]*i[A+1],_[A]=A===m-1?1:_[A+1]*u[0].dims[A+1],v+=`
- output_pitches[${A}] = ${b[A]};
- input_pitches[${A}] = ${_[A]};
- `;const w=`
- float getInputFloat(int index) {
- vec2 coords = offsetToCoords(index, ${e}, ${r});
- float value = getColorAsFloat(${t.texture2D}(X, coords));
- return value;
- }
- `,S=s.mode==="nearest"?`
- ${w}
- float process(int indices[${m}]) {
- int input_index = 0;
- int output_index = coordsToOffset(TexCoords, ${d}, ${g});
-
- ${v}
-
- int d, m;
- for (int dim = 0; dim < ${m}; ++dim) {
- d = output_index / output_pitches[dim];
- m = output_index - d * output_pitches[dim];
- output_index = m;
-
- if (scales[dim] != 1 && d > 0) {
- int d2 = d / scales[dim];
- m = d - d2 * scales[dim];
- d = d2;
- }
- input_index += input_pitches[dim] * d;
- }
-
- return getInputFloat(input_index);
- }`:m===4?`
- ${w}
- float process(int indices[4]) {
- int input_index = 0;
- int output_index = coordsToOffset(TexCoords, ${d}, ${g});
-
- ${v}
-
- int m;
- int index_of_dim0, index_of_dim1, index_of_dim2, index_of_dim3;
- index_of_dim0 = output_index / output_pitches[0];
- m = output_index - index_of_dim0 * output_pitches[0];
- index_of_dim1 = m / output_pitches[1];
- m = m - index_of_dim1 * output_pitches[1];
- index_of_dim2 = m / output_pitches[2];
- m = m - index_of_dim2 * output_pitches[2];
- index_of_dim3 = m;
-
- int index_of_input_dim2, index_of_input_dim3, x_offset, y_offset;
- index_of_input_dim2 = index_of_dim2 / scales[2];
- y_offset = index_of_dim2 - index_of_input_dim2 * scales[2];
- index_of_input_dim3 = index_of_dim3 / scales[3];
- x_offset = index_of_dim3 - index_of_input_dim3 * scales[3];
-
- input_index = index_of_dim0 * input_pitches[0] +
- index_of_dim1 * input_pitches[1] +
- index_of_input_dim2 * input_pitches[2] +
- index_of_input_dim3;
-
- float x00 = getInputFloat(input_index);
- float x10, x01, x11;
-
- bool end_of_dim2 = false;
- if (index_of_input_dim2 == (${u[0].dims[2]} - 1)) {
- // It's the end in dimension 2
- x01 = x00;
- end_of_dim2 = true;
- } else {
- x01 = getInputFloat(input_index + input_pitches[2]);
- }
-
- if (index_of_input_dim3 == (input_pitches[2] - 1)) {
- // It's the end in dimension 3
- x10 = x00;
- x11 = x01;
- }
- else {
- x10 = getInputFloat(input_index + 1);
- x11 = end_of_dim2 ? x10 : getInputFloat(input_index + input_pitches[2] + 1);
- }
-
- float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[2]);
- float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[2]);
- return y0 + float(x_offset) * (y1 - y0) / float(scales[3]);
- }`:`
- ${w}
- float process(int indices[2]) {
- int input_index = 0;
- int output_index = coordsToOffset(TexCoords, ${d}, ${g});
-
- ${v}
-
- int m;
- int index_of_dim0, index_of_dim1;
- index_of_dim0 = output_index / output_pitches[0];
- m = output_index - index_of_dim0 * output_pitches[0];
- index_of_dim1 = m;
-
- int index_of_input_dim0, index_of_input_dim1, x_offset, y_offset;
- index_of_input_dim0 = index_of_dim0 / scales[0];
- y_offset = index_of_dim0 - index_of_input_dim0 * scales[0];
- index_of_input_dim1 = index_of_dim1 / scales[1];
- x_offset = index_of_dim1 - index_of_input_dim1 * scales[1];
-
- input_index = index_of_input_dim0 * input_pitches[0] + index_of_input_dim1;
-
- float x00 = getInputFloat(input_index);
- float x10, x01, x11;
-
- bool end_of_dim0 = false;
- if (index_of_input_dim0 == (${u[0].dims[0]} - 1)) {
- // It's the end in dimension 0
- x01 = x00;
- end_of_dim0 = true;
- } else {
- x01 = getInputFloat(input_index + input_pitches[0]);
- }
-
- if (index_of_input_dim1 == (input_pitches[0] - 1)) {
- // It's the end in dimension 1
- x10 = x00;
- x11 = x01;
- }
- else {
- x10 = getInputFloat(input_index + 1);
- x11 = end_of_dim0 ? x10 : getInputFloat(input_index + input_pitches[0] + 1);
- }
-
- float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[0]);
- float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[0]);
- return y0 + float(x_offset) * (y1 - y0) / float(scales[1]);
- }`;return Object.assign(Object.assign({},a),{output:{dims:i,type:u[0].type,textureType:f.TextureType.unpacked},shaderSource:S,variables:[{name:"scales",type:"int",arrayLength:s.scales.length,data:s.scales.map(A=>Math.ceil(A))}]})};n.validateInputs=(p,u)=>{if(!p||u.opset<9&&p.length!==1||u.opset>=9&&u.opset<11&&p.length!==2||u.opset>=11&&p.length<2)throw new Error("invalid inputs.");if(u.scales.length>0&&p[0].dims.length!==u.scales.length)throw new Error("Invalid input shape.");if(p[0].type==="string")throw new Error("Invalid input tensor types.")},n.scalesValidation=(p,u,s)=>{if(s){for(const t of p)if(t<=0)throw new Error("Scale value should be greater than 0.")}else for(const t of p)if(t<1)throw new Error("Scale value should be greater than or equal to 1.");if(!(u!=="linear"&&u!=="cubic"||p.length===2||p.length===4&&p[0]===1&&p[1]===1))throw new Error(`'Linear' mode and 'Cubic' mode only support 2-D inputs ('Bilinear', 'Bicubic') or 4-D inputs with the corresponding outermost 2 scale values being 1 in the ${s?"Resize":"Upsample"} opeartor.`)}},1958:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ProgramManager=void 0;const l=o(1670),c=o(6231),f=o(8879),a=o(5060);n.ProgramManager=class{constructor(h,p,u){this.profiler=h,this.glContext=p,this.textureLayoutStrategy=u,this.repo=new Map,this.attributesBound=!1}getArtifact(h){return this.repo.get(h)}setArtifact(h,p){this.repo.set(h,p)}run(h,p,u){var s;this.profiler.event("op",`ProgramManager.run ${(s=h.programInfo.name)!==null&&s!==void 0?s:"unknown kernel"}`,()=>{var t;const e=this.glContext.gl,r=h.program;e.useProgram(r);try{this.bindOutput(u),this.attributesBound||this.bindAttributes(h.attribLocations),this.bindUniforms(h.uniformLocations,(t=h.programInfo.variables)!==null&&t!==void 0?t:[],p)}catch(i){throw c.Logger.error("ProgramManager",h.programInfo.shaderSource),i}this.profiler.event("backend","GlContext.draw()",()=>{this.glContext.draw()})},this.glContext)}dispose(){this.vertexShader&&this.glContext.deleteShader(this.vertexShader),this.repo.forEach(h=>this.glContext.deleteProgram(h.program))}build(h,p,u){return this.profiler.event("backend","ProgramManager.build",()=>{const s=new f.GlslPreprocessor(this.glContext,h,p,u),t=s.preprocess(),e=this.compile(t);return{programInfo:h,program:e,uniformLocations:this.getUniformLocations(e,s.context.programInfo.inputNames,s.context.programInfo.variables),attribLocations:this.getAttribLocations(e)}})}compile(h){if(!this.vertexShader){c.Logger.verbose("ProrgramManager","Compiling and caching Vertex shader for the first time");const s=(0,a.getVertexShaderSource)(this.glContext.version);this.vertexShader=this.glContext.compileShader(s,this.glContext.gl.VERTEX_SHADER)}l.env.debug&&c.Logger.verbose("ProrgramManager",`FragShader:
-${h}
-`);const p=this.glContext.compileShader(h,this.glContext.gl.FRAGMENT_SHADER),u=this.glContext.createProgram(this.vertexShader,p);return this.glContext.deleteShader(p),u}bindOutput(h){const p=h.width,u=h.height;c.Logger.verbose("ProrgramManager",`Binding output texture to Framebuffer: w/h=${p}/${u}, shape=${h.shape}, type=${h.tensor.type}`),this.glContext.attachFramebuffer(h.texture,p,u)}bindAttributes(h){const p=h.position,u=h.textureCoord;this.glContext.setVertexAttributes(p,u),this.attributesBound=!0}bindUniforms(h,p,u){var s;const t=this.glContext.gl;let e=0;for(const{name:r,type:i,location:d,arrayLength:g}of h){const m=(s=p.find(b=>b.name===r))===null||s===void 0?void 0:s.data;if(i!=="sampler2D"&&!m)throw new Error(`variable '${r}' does not have data defined in program info`);switch(i){case"sampler2D":this.bindTexture(u[e],d,e),e++;break;case"float":g?t.uniform1fv(d,m):t.uniform1f(d,m);break;case"int":g?t.uniform1iv(d,m):t.uniform1i(d,m);break;default:throw new Error(`Uniform not implemented: ${i}`)}}}bindTexture(h,p,u){this.glContext.bindTextureToUniform(h.texture,u,p)}getAttribLocations(h){return{position:this.getAttribLocation(h,"position"),textureCoord:this.getAttribLocation(h,"textureCoord")}}getUniformLocations(h,p,u){const s=[];if(p)for(const t of p)s.push({name:t,type:"sampler2D",location:this.getUniformLocation(h,t)});if(u)for(const t of u)s.push(Object.assign(Object.assign({},t),{location:this.getUniformLocation(h,t.name)}));return s}getUniformLocation(h,p){const u=this.glContext.gl.getUniformLocation(h,p);if(u===null)throw new Error(`Uniform ${p} not found.`);return u}getAttribLocation(h,p){return this.glContext.gl.getAttribLocation(h,p)}}},6416:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLSessionHandler=void 0;const l=o(6231),c=o(1047),f=o(8316),a=o(1640),h=o(1958),p=o(7859),u=o(5702);n.WebGLSessionHandler=class{constructor(s,t){this.backend=s,this.context=t,this.layoutStrategy=new p.PreferLogicalStrategy(s.glContext.maxTextureSize),this.programManager=new h.ProgramManager(this.context.profiler,s.glContext,this.layoutStrategy),this.textureManager=new u.TextureManager(s.glContext,this.layoutStrategy,this.context.profiler,{reuseTextures:s.textureCacheMode==="full"}),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map,this.pack=s.pack,this.pack2unpackMap=new Map,this.unpack2packMap=new Map}createInferenceHandler(){return new f.WebGLInferenceHandler(this)}onGraphInitialized(s){const t=s.getValues().filter(e=>e.from===-1&&e.tensor).map(e=>e.tensor.dataId);this.initializers=new Set(t)}isInitializer(s){return!!this.initializers&&this.initializers.has(s)}addInitializer(s){this.initializers.add(s)}getTextureData(s,t){return t?this.packedTextureDataCache.get(s):this.unpackedTextureDataCache.get(s)}setTextureData(s,t,e=!1){l.Logger.verbose("WebGLSessionHandler","Storing Texture data in cache"),e?this.packedTextureDataCache.set(s,t):this.unpackedTextureDataCache.set(s,t)}dispose(){this.programManager.dispose(),this.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(s=>this.textureManager.releaseTexture(s,!0)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(s=>this.textureManager.releaseTexture(s,!0)),this.unpackedTextureDataCache=new Map}resolve(s,t,e){const 
r=(0,c.resolveOperator)(s,t,a.WEBGL_OP_RESOLVE_RULES);return{impl:r.opImpl,context:r.opInit?r.opInit(s,e):s}}}},7769:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Uint8DataEncoder=n.RGBAFloatDataEncoder=n.RedFloat32DataEncoder=void 0;const l=o(6231);n.RedFloat32DataEncoder=class{constructor(c,f=1){if(f===1)this.internalFormat=c.R32F,this.format=c.RED,this.textureType=c.FLOAT,this.channelSize=f;else{if(f!==4)throw new Error(`Invalid number of channels: ${f}`);this.internalFormat=c.RGBA32F,this.format=c.RGBA,this.textureType=c.FLOAT,this.channelSize=f}}encode(c,f){let a,h;return c.constructor!==Float32Array&&(l.Logger.warning("Encoder","data was not of type Float32; creating new Float32Array"),h=new Float32Array(c)),f*this.channelSize>c.length?(l.Logger.warning("Encoder","Source data too small. Allocating larger array"),h=c,a=this.allocate(f*this.channelSize),h.forEach((p,u)=>a[u]=p)):(h=c,a=h),a}allocate(c){return new Float32Array(4*c)}decode(c,f){return this.channelSize===1?c.filter((a,h)=>h%4==0).subarray(0,f):c.subarray(0,f)}},n.RGBAFloatDataEncoder=class{constructor(c,f=1,a){if(f!==1&&f!==4)throw new Error(`Invalid number of channels: ${f}`);this.internalFormat=c.RGBA,this.format=c.RGBA,this.channelSize=f,this.textureType=a||c.FLOAT}encode(c,f){let a=c;return this.channelSize===1&&(l.Logger.verbose("Encoder","Exploding into a larger array"),a=this.allocate(f),c.forEach((h,p)=>a[4*p]=h)),a}allocate(c){return new Float32Array(4*c)}decode(c,f){return this.channelSize===1?c.filter((a,h)=>h%4==0).subarray(0,f):c.subarray(0,f)}},n.Uint8DataEncoder=class{constructor(c,f=1){if(this.channelSize=4,f===1)this.internalFormat=c.ALPHA,this.format=c.ALPHA,this.textureType=c.UNSIGNED_BYTE,this.channelSize=f;else{if(f!==4)throw new Error(`Invalid number of channels: ${f}`);this.internalFormat=c.RGBA,this.format=c.RGBA,this.textureType=c.UNSIGNED_BYTE,this.channelSize=f}}encode(c,f){return new Uint8Array(c.buffer,c.byteOffset,c.byteLength)}allocate(c){return new Uint8Array(c*this.channelSize)}decode(c,f){if(c instanceof Uint8Array)return c.subarray(0,f);throw new Error(`Invalid array type: ${c.constructor}`)}}},7859:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getBatchDim=n.sizeToSquarishShape=n.getRowsCols=n.sizeFromShape=n.isInt=n.parseAxisParam=n.squeezeShape=n.PreferLogicalStrategy=n.AlwaysKeepOriginalSizeStrategy=void 0;const l=o(6231),c=o(2517);function f(s,t){const e=[],r=[],i=t!=null&&Array.isArray(t)&&t.length===0,d=t==null||i?null:a(t,s).sort();let g=0;for(let m=0;mm)&&s[m]===1&&(e.push(s[m]),r.push(m)),d[g]<=m&&g++}s[m]!==1&&(e.push(s[m]),r.push(m))}return{newShape:e,keptDims:r}}function a(s,t){const e=t.length;return s=s==null?t.map((r,i)=>i):[].concat(s),(0,c.assert)(s.every(r=>r>=-e&&r`All values in axis param must be in range [-${e}, ${e}) but got axis ${s}`),(0,c.assert)(s.every(h),()=>`All values in axis param must be integers but got axis ${s}`),s.map(r=>r<0?e+r:r)}function h(s){return s%1==0}function p(s){if(s.length===0)return 1;let t=s[0];for(let e=1;e=s.length?1:s.slice(t.breakAxis).reduce((m,b)=>m*b),g=t.breakAxis<=0?1:s.slice(0,t.breakAxis).reduce((m,b)=>m*b);if(!(d>e||g>e))return[d,g];l.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${s}, breakAxis:${t.breakAxis}`)}const r=s.reduce((d,g)=>d*g);let i=Math.floor(Math.sqrt(r));for(;i=e||r%i!=0)throw new Error(`The given dimensions are outside this GPU's boundaries: 
${s}`);return[i,r/i]}},n.PreferLogicalStrategy=class{constructor(s){this.maxTextureSize=s}computeTextureWH(s,t){const e=this.computeTexture(s,t);return t&&t.isPacked&&(e[0]/=2,e[1]/=2),t&&t.reverseWH?[e[1],e[0]]:e}computeTexture(s,t){const e=t&&t.isPacked;if(s.length===0)return e?[2,2]:[1,1];let r=this.maxTextureSize;if(t&&t.breakAxis!==void 0){const g=t.breakAxis>=s.length?1:s.slice(t.breakAxis).reduce((b,_)=>b*_),m=t.breakAxis<=0?1:s.slice(0,t.breakAxis).reduce((b,_)=>b*_);if(!(g>r||m>r))return[g,m];l.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${s}, breakAxis:${t.breakAxis}`)}let i=s.slice(0);e&&(r*=2,i=i.map((g,m)=>m>=i.length-2?i[m]%2==0?i[m]:i[m]+1:i[m]),i.length===1&&(i=[2,i[0]])),i.length!==2&&(i=f(i).newShape);const d=p(i);return i.length<=1&&d<=r?[1,d]:i.length===2&&i[0]<=r&&i[1]<=r?i:i.length===3&&i[0]*i[1]<=r&&i[2]<=r?[i[0]*i[1],i[2]]:i.length===3&&i[0]<=r&&i[1]*i[2]<=r?[i[0],i[1]*i[2]]:i.length===4&&i[0]*i[1]*i[2]<=r&&i[3]<=r?[i[0]*i[1]*i[2],i[3]]:i.length===4&&i[0]<=r&&i[1]*i[2]*i[3]<=r?[i[0],i[1]*i[2]*i[3]]:e?u(d/4).map(g=>2*g):u(d)}},n.squeezeShape=f,n.parseAxisParam=a,n.isInt=h,n.sizeFromShape=p,n.getRowsCols=function(s){if(s.length===0)throw Error("Cannot get rows and columns of an empty shape array.");return[s.length>1?s[s.length-2]:1,s[s.length-1]]},n.sizeToSquarishShape=u,n.getBatchDim=function(s,t=2){return p(s.slice(0,s.length-t))}},4057:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createTextureLayoutFromShape=n.calculateTextureWidthAndHeight=n.createTextureLayoutFromTextureType=void 0;const l=o(2517),c=o(2039);n.createTextureLayoutFromTextureType=(f,a,h)=>{const p=h===c.TextureType.unpacked||h===c.TextureType.unpackedReversed?1:4,u=h===c.TextureType.packed,s=h===c.TextureType.unpackedReversed||h===c.TextureType.packed,t=h===c.TextureType.packedLastDimension?a.length-1:void 0,e=h===c.TextureType.packedLastDimension?a.map((r,i)=>i===a.length-1?4*r:r):void 0;return(0,n.createTextureLayoutFromShape)(f,a,p,e,{isPacked:u,reverseWH:s,breakAxis:t})},n.calculateTextureWidthAndHeight=(f,a,h)=>{const p=(0,n.createTextureLayoutFromTextureType)(f,a,h);return[p.width,p.height]},n.createTextureLayoutFromShape=(f,a,h=1,p,u)=>{const s=!(!u||!u.isPacked),[t,e]=f.computeTextureWH(s&&p||a,u),r=a.length;let i=a.slice(0);if(r===0&&(i=[1]),h===1)p=a;else if(s){if(h!==4)throw new Error("a packed texture must be 4-channel");p=a,r>0&&(i[r-1]=Math.ceil(i[r-1]/2)),r>1&&(i[r-2]=Math.ceil(i[r-2]/2))}else if(!p)throw new Error("Unpacked shape is needed when using channels > 1");return{width:t,height:e,channels:h,isPacked:s,shape:i,strides:l.ShapeUtil.computeStrides(i),unpackedShape:p,reversedWH:u&&u.reverseWH}}},5702:(y,n,o)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.TextureManager=void 0;const l=o(6231);n.TextureManager=class{constructor(c,f,a,h){this.glContext=c,this.layoutStrategy=f,this.profiler=a,this.config=h,this.pendingRead=new Map,h.reuseTextures&&(this.inUseTextures=new Map,this.idleTextures=new Map,this.textureLookup=new Map)}createTextureFromLayout(c,f,a,h){const p=this.toEncoderType(c),u=this.glContext.getEncoder(p,f.channels||1,h);if(f.isPacked&&h===1)throw new Error("not implemented");const s=f.width,t=f.height;let e,r;if(this.config.reuseTextures){e=`${s}x${t}_${u.format}_${u.internalFormat}_${u.textureType}`,r=this.inUseTextures.get(e),r||(r=[],this.inUseTextures.set(e,r));const d=this.idleTextures.get(e);if(d&&d.length>0){const g=d.pop();return 