diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Guitar Rig 6 Full Version [VERIFIED].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Guitar Rig 6 Full Version [VERIFIED].md deleted file mode 100644 index f5322e6892dbd055212c41bd00ca5fe0eeeca9f6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Guitar Rig 6 Full Version [VERIFIED].md +++ /dev/null @@ -1,168 +0,0 @@ -
-

How to Download Guitar Rig 6 Full Version

-

If you are looking for a way to create realistic and professional guitar tones on your computer, you might have heard of Guitar Rig 6, the latest version of the popular amp simulator and multi-effects rack from Native Instruments. But how can you download Guitar Rig 6 full version for free or at a discounted price? And how can you install and use it to get the most out of your guitar playing and recording?

-

    
-

In this article, we will answer all these questions and more. We will explain what Guitar Rig 6 is and why you need it, what features and benefits it offers, what system requirements and compatibility it has, how to download it for free or at a low cost, how to install and activate it on your computer, and how to use it to create amazing guitar tones. By the end of this article, you will have everything you need to know about downloading Guitar Rig 6 full version and using it to enhance your guitar sound.

-

What is Guitar Rig 6 and why you need it

-

Guitar Rig 6 is a software program that simulates the sound of various guitar amps, cabinets, pedals, effects, and tools. It allows you to plug your guitar into your computer and process your signal with a wide range of components that emulate real hardware devices. You can also use it as a standalone application or as a plugin in your digital audio workstation (DAW).

-

Guitar Rig 6 is designed for guitarists of all levels and styles, from beginners to professionals, from rock to metal, from blues to jazz. Whether you want to practice, record, perform, or experiment with different sounds, Guitar Rig 6 can help you achieve your goals. You can use it to create realistic and authentic tones that match your favorite artists and genres, or you can use it to craft your own unique sounds that express your personality and creativity.

-

Guitar Rig 6 features and benefits

-

    Guitar Rig 6 comes with a host of features that make it one of the best guitar-effects programs on the market: new amps and effects modeled with Intelligent Circuit Modeling, a library of more than 300 ready-made presets, a flexible rack where you can chain and reorder components freely, and the option to run it standalone or as a VST, AU, or AAX plugin inside your DAW.
    

-

- -

Guitar Rig 6 system requirements and compatibility

-

    Guitar Rig 6 is compatible with both Windows and macOS. At a minimum you need a recent 64-bit operating system, an Intel or AMD processor, enough free RAM and disk space for the installation, and an internet connection for download and activation; check the Native Instruments website for the exact figures for your platform and version.
    

    

How to download Guitar Rig 6 full version for free

-

Now that you know what Guitar Rig 6 is and what it can do for you, you might be wondering how to download it for free or at a low cost. There are three ways to get Guitar Rig 6 full version for free or at a discounted price:

-

Guitar Rig 6 Player: the free version with limited features

-

    The first way to get Guitar Rig 6 full version for free is to download Guitar Rig 6 Player, the free version of Guitar Rig 6 that comes with limited features. Guitar Rig 6 Player is a great way to try out Guitar Rig 6 and see if you like it before buying the full version. Guitar Rig 6 Player includes a small, free selection of amps, effects, and presets: enough to process your guitar signal and get a feel for the workflow, but without the full component library of the Pro version.
    

- -

To download Guitar Rig 6 Player for free, you need to create a free Native Instruments account and download the Native Access app. Native Access is a software that manages the installation and activation of Native Instruments products. Once you have Native Access installed, you can download Guitar Rig 6 Player from the Not Installed tab and install it on your computer.

-

Guitar Rig 6 Demo: the trial version with full features

-

    The second way to get Guitar Rig 6 full version for free is to download Guitar Rig 6 Demo, the trial version of Guitar Rig 6 that comes with full features. Guitar Rig 6 Demo is a great way to test all the features and functions of Guitar Rig 6 and see if it meets your needs and expectations before buying the full version. Guitar Rig 6 Demo includes the complete feature set of the full version, with the usual Native Instruments demo restriction that each session is time-limited.
    

- -

To download Guitar Rig 6 Demo for free, you need to create a free Native Instruments account and download the Native Access app. Once you have Native Access installed, you can download Guitar Rig 6 Demo from the Not Installed tab and install it on your computer.

-

Guitar Rig 6 Pro: the paid version with all features

-

    The third way to get Guitar Rig 6 full version is to buy Guitar Rig 6 Pro, the paid version of Guitar Rig 6 that comes with all features. Guitar Rig 6 Pro is the ultimate guitar effects software that gives you unlimited creative possibilities and professional results. Guitar Rig 6 Pro includes the full collection of amps, cabinets, pedals, effects, and tools, all presets, and every feature described in this article, with no time or content restrictions.
    

- -

    To buy Guitar Rig 6 Pro, you need to create a free Native Instruments account and download the Native Access app. Once you have Native Access installed, you can buy Guitar Rig 6 Pro from the Shop tab and install it on your computer. The price of Guitar Rig 6 Pro is $199 USD. However, there are some ways to get it at a discounted price, such as waiting for one of Native Instruments' seasonal sales, using the update or upgrade pricing offered to owners of earlier Guitar Rig versions, or buying it as part of a larger bundle such as Komplete.
    

-

How to install and activate Guitar Rig 6 full version

-

Once you have downloaded Guitar Rig 6 full version, either for free or for a price, you need to install and activate it on your computer. Here are the steps to do so:

-

How to install Guitar Rig 6 on your computer

-

To install Guitar Rig 6 on your computer, you need to use the Native Access app that you downloaded earlier. Here are the steps to install Guitar Rig 6 with Native Access:

-
    -
      1. Open Native Access and log in with your Native Instruments account.
      2. Go to the Not Installed tab and find Guitar Rig 6 in the list.
      3. Click on the Install button and choose a location for the installation.
      4. Wait for the installation to complete and click on the Finish button.
      5. Guitar Rig 6 is now installed on your computer and ready to use.
    
-

How to activate Guitar Rig 6 with your license key or Native Access account

-

    To activate Guitar Rig 6 on your computer, you need to use either your license key or your Native Access account. If you bought the software through the Native Instruments online shop, simply log in to Native Access with that account and the product is activated automatically. If you have a serial number from a boxed copy or a bundle, open Native Access, choose the option to add a serial, enter your license key, and the product will appear in your product list ready to install and activate.
    

- -

How to use Guitar Rig 6 full version to create amazing guitar tones

-

Now that you have installed and activated Guitar Rig 6 full version on your computer, you can start using it to create amazing guitar tones. Here are some tips and tricks on how to use Guitar Rig 6 full version effectively and efficiently:

-

How to navigate the Guitar Rig 6 interface and browser

-

    Guitar Rig 6 has a user-friendly interface that consists of four main sections: the header (global controls such as input and output levels, the preset name, and the CPU meter), the browser (where you find presets and components), the rack (where the components of your current signal chain are stacked and edited), and the footer (utility tools such as the tuner, metronome, and tapedecks).
    

- -

To navigate the Guitar Rig 6 interface and browser, you can use your mouse, keyboard, or MIDI device. You can also use shortcuts and commands to access various functions and tools more quickly. For example, you can use the arrow keys to navigate the preset list, the style list, and the component list. You can also use the spacebar to bypass or enable a component, or use the delete key to remove a component from the rack. You can also use commands such as Ctrl+C to copy a component, Ctrl+V to paste a component, Ctrl+Z to undo an action, etc.

-

How to load and customize presets and components

-

    Guitar Rig 6 comes with over 300 presets that are ready to use or tweak to your liking. You can also create your own presets and save them for later use. To load a preset, open the browser, filter by style or component, and double-click the preset you want. To customize it, select any component in the rack to tweak its controls, drag components up or down to change their order, add or remove components from the browser, and save the result as a user preset.
    

- -

How to use the new amps and effects powered by Intelligent Circuit Modeling

-

    Guitar Rig 6 introduces a new technology called Intelligent Circuit Modeling that uses artificial intelligence to analyze and recreate the behavior of real analog circuits. This results in more realistic and responsive sounds that capture the nuances and character of the original hardware. Guitar Rig 6 features three new amps and 16 new effects based on this technology, such as the Chicago, Bass Invader, Fire Breather, Harmonic Synthesizer, Grain Delay, Choral Reef, etc. A good way to explore them is to start from a factory preset built around one of the new amps, swap cabinets and adjust the gain to hear how dynamically the models respond to your playing, and then place the new effects, such as Grain Delay or Choral Reef, after the amp to shape the sound further.
    

- -

Conclusion and FAQs

-

Guitar Rig 6 is a powerful and versatile guitar effects software that can help you create realistic and professional guitar tones on your computer. It offers a wide range of features and benefits that make it one of the best guitar effects software on the market. It also comes with three ways to get Guitar Rig 6 full version for free or at a discounted price: Guitar Rig 6 Player, Guitar Rig 6 Demo, and Guitar Rig 6 Pro.

-

In this article, we have explained what Guitar Rig 6 is and why you need it, what features and benefits it offers, what system requirements and compatibility it has, how to download it for free or at a low cost, how to install and activate it on your computer, and how to use it to create amazing guitar tones. We hope that this article has helped you learn everything you need to know about downloading Guitar Rig 6 full version and using it to enhance your guitar sound.

-

If you have any questions or doubts about Guitar Rig 6 full version, here are some frequently asked questions (FAQs) that might help you:

-

Q: Can I use Guitar Rig 6 with any guitar?

-

A: Yes, you can use Guitar Rig 6 with any electric guitar, acoustic guitar, bass guitar, or any other instrument that has a pickup or a microphone. You just need to connect your instrument to your computer via an audio interface with an instrument input.

-

Q: Can I use Guitar Rig 6 with any DAW?

-

A: Yes, you can use Guitar Rig 6 with any DAW that supports VST, AU, or AAX plugin formats. You just need to load Guitar Rig 6 as an effect plugin in your DAW's track or bus.

-

Q: Can I use Guitar Rig 6 offline?

-

A: Yes, you can use Guitar Rig 6 offline as a standalone application without an internet connection. However, you need an internet connection to download, install, and activate Guitar Rig 6 for the first time. You also need an internet connection to access the online features and updates of Guitar Rig 6.

-

Q: Can I use Guitar Rig 6 with other guitar effects software or hardware?

-

A: Yes, you can use Guitar Rig 6 with other guitar effects software or hardware, as long as they are compatible and do not cause any conflicts or issues. You can use Guitar Rig 6 as an effect plugin in your DAW and combine it with other plugins, or you can use Guitar Rig 6 as a standalone application and route it to other software or hardware via an audio interface or a virtual cable.

-

Q: Can I share my Guitar Rig 6 presets and sounds with others?

-

A: Yes, you can share your Guitar Rig 6 presets and sounds with others, as long as you respect the intellectual property rights of Native Instruments and the original creators of the components and presets. You can export your presets and sounds as files and send them to others via email, social media, cloud storage, etc. You can also import presets and sounds from others and load them to your Guitar Rig 6.

    
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/893u2is User Manual.md b/spaces/1gistliPinn/ChatGPT4/Examples/893u2is User Manual.md deleted file mode 100644 index 6ae7c610539ca7890ad06fefb4e352a85cefe688..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/893u2is User Manual.md +++ /dev/null @@ -1,6 +0,0 @@ -

893u2is User Manual


    



-
-Oct 8, 2015 Only after I read the instructions carefully did I see the ... Station Users Guide Multi-Function Hdd Docking Manual 893u2is ... 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Age Of Empires 3 No Cd Crack Gamecopyworld Gtahttps Scoutmails.com Index301.php K Age Of Empires 3 WORK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Age Of Empires 3 No Cd Crack Gamecopyworld Gtahttps Scoutmails.com Index301.php K Age Of Empires 3 WORK.md deleted file mode 100644 index 401d7e7c49993db068414afc4a0f4bfacc8b204a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Age Of Empires 3 No Cd Crack Gamecopyworld Gtahttps Scoutmails.com Index301.php K Age Of Empires 3 WORK.md +++ /dev/null @@ -1,129 +0,0 @@ - -

Age of Empires 3 No CD Crack GameCopyWorld: How to Play the Classic Strategy Game Without a Disc

- -

Age of Empires 3 is one of the most popular and acclaimed strategy games of all time, but it also requires a CD to play. If you have lost your CD, or you want to play the game on a different computer without carrying the disc around, you might be looking for a way to play Age of Empires 3 no CD crack GameCopyWorld.

- -

GameCopyWorld is a website that provides game fixes, trainers, cheats, and patches for various PC games. One of the game fixes they offer is a no CD crack for Age of Empires 3, which allows you to play the game without inserting the CD every time. This can also help you avoid potential damage to your CD or CD drive.

-

    
- -

In this article, we will show you how to download and install Age of Empires 3 no CD crack GameCopyWorld, and how to enjoy the game without any hassle. We will also tell you about some of the features and benefits of playing Age of Empires 3 no CD crack GameCopyWorld.

- -

How to Download and Install Age of Empires 3 No CD Crack GameCopyWorld

- -

To download and install Age of Empires 3 no CD crack GameCopyWorld, you will need to follow these steps:

- -
    -
  1. Go to https://www.gamecopyworld.com/games/pc_age_of_empires_3.shtml and scroll down to find the game fix you need. Depending on which version and expansion of Age of Empires 3 you have, you will need to choose the appropriate no CD crack. For example, if you have Age of Empires 3: Complete Collection, which includes the base game and both expansions (The WarChiefs and The Asian Dynasties), you will need to download Age of Empires III: Complete Collection v1.0 [EN] Fixed Files.
  2. -
  3. Click on the download link and save the file to your computer. You may need to use a program like WinRAR or 7-Zip to extract the file.
  4. -
  5. Locate the folder where you have installed Age of Empires 3 on your computer. It is usually in C:\Program Files (x86)\Microsoft Games\Age of Empires III.
  6. -
  7. Copy the cracked files from the downloaded folder and paste them into the installation folder, replacing the original files. You may need to backup the original files in case you want to restore them later.
  8. -
  9. Run the game as usual. You should be able to play Age of Empires 3 without inserting the CD.
  10. -
- -

Features and Benefits of Playing Age of Empires 3 No CD Crack GameCopyWorld

- -

Playing Age of Empires 3 no CD crack GameCopyWorld has some advantages over playing with the CD. Here are some of them:

- - - -

Conclusion

- -

Age of Empires 3 is a classic strategy game that deserves to be played by anyone who loves history, culture, and warfare. With Age of Empires 3 no CD crack GameCopyWorld, you can play the game without any hassle or limitation. Just follow our guide on how to download and install Age of Empires 3 no CD crack GameCopyWorld, and enjoy the game at its best.

- -

If you liked this article, please share it with your friends who are also fans of Age of Empires 3. And if you have any questions or comments, feel free to leave them below. We would love to hear from you!

-

What is Age of Empires 3 and Why Should You Play It?

- -

Age of Empires 3 is a real-time strategy game that was released in 2005 by Microsoft Studios and Ensemble Studios. It is the third installment in the Age of Empires series, which is one of the most successful and influential strategy game franchises of all time.

- -

Age of Empires 3 takes place during the Age of Discovery, from the 15th to the 19th century. You can choose from eight European civilizations, each with their own unique units, buildings, technologies, and abilities. You can also play as three native American civilizations in the WarChiefs expansion, or as three Asian civilizations in the Asian Dynasties expansion.

- -

Age of Empires 3 offers a rich and varied gameplay experience that will appeal to both casual and hardcore strategy fans. You can explore and colonize new lands, trade and fight with other players or AI opponents, build and manage your economy and military, research new technologies and upgrades, and customize your home city that provides you with bonuses and shipments.

-

- -

Age of Empires 3 also features a compelling campaign mode that follows the story of three generations of the Black family, as they participate in historical events such as the Seven Years' War, the American Revolution, and the Napoleonic Wars. The campaign mode has cinematic cutscenes, voice acting, and scripted scenarios that will immerse you in the history and culture of the era.

- -

Age of Empires 3 is a classic strategy game that deserves to be played by anyone who loves history, culture, and warfare. It has stunning graphics, sound effects, and music that bring the game world to life. It has a large and active online community that supports the game with mods, maps, tournaments, and more. It has a high replay value, as you can try different strategies, civilizations, game modes, and difficulty levels.

- -

How to Play Age of Empires 3 No CD Crack GameCopyWorld Online

- -

One of the best features of Age of Empires 3 is its online multiplayer mode, where you can challenge other players from around the world in various game modes such as supremacy, deathmatch, treaty, king of the hill, and more. You can also join or create clans, chat with other players, check your stats and rankings, and earn medals and achievements.

- -

However, to play Age of Empires 3 online, you need to have a valid CD key that is registered on your Microsoft account. If you have lost your CD key, or you have downloaded Age of Empires 3 no CD crack GameCopyWorld from our website, you might not be able to access the official online servers.

- -

But don't worry, there is a way to play Age of Empires 3 no CD crack GameCopyWorld online without a CD key. All you need to do is download and install a third-party client called ESOCommunity Patch. This patch will allow you to play Age of Empires 3 no CD crack GameCopyWorld online on ESOCommunity servers, which are unofficial but popular servers that host thousands of players every day.

- -

To download and install ESOCommunity Patch for Age of Empires 3 no CD crack GameCopyWorld, you will need to follow these steps:

- -
    -
  1. Go to https://eso-community.net/download-patch and click on the download button.
  2. -
  3. Run the installer and follow the instructions. Make sure you select your Age of Empires 3 installation folder when prompted.
  4. -
  5. Launch Age of Empires 3 no CD crack GameCopyWorld from your desktop shortcut or start menu.
  6. -
  7. Create a new ESO account or log in with your existing one. You don't need a CD key to create an account.
  8. -
  9. Enjoy playing Age of Empires 3 no CD crack GameCopyWorld online on ESOCommunity servers!
  10. -
- -

Conclusion

- -

Age of Empires 3 no CD crack GameCopyWorld is a great way to play the classic strategy game without a disc. You can download and install it easily from our website, and enjoy all the features and content of the game without any hassle. You can also play it online on ESOCommunity servers with other players who have downloaded Age of Empires 3 no CD crack GameCopyWorld.

- -

If you liked this article, please share it with your friends who are also fans of Age of Empires 3. And if you have any questions or comments, feel free to leave them below. We would love to hear from you!

-

How to Master the Combat System in Age of Empires 3

- -

Age of Empires 3 is not just about building and managing your economy; it is also about fighting and conquering your enemies. The combat system in Age of Empires 3 is based on a rock-paper-scissors model, where each unit type has strengths and weaknesses against other unit types. For example, infantry units are good against cavalry units, cavalry units are good against artillery units, and artillery units are good against infantry units.

- -

To master the combat system in Age of Empires 3, you need to know the different unit types and their counters, as well as how to use formations, stances, and special abilities. You also need to pay attention to the terrain, the weather, and the line of sight, as they can affect the performance and visibility of your units.

- -

Here are some general tips and tricks for combat in Age of Empires 3:

- - - -

How to Enjoy the Campaign Mode in Age of Empires 3

- -

If you are looking for a more story-driven and cinematic experience in Age of Empires 3, you might want to try the campaign mode. The campaign mode consists of three acts that follow the adventures of the Black family through different historical periods and continents.

- -

The first act is called Blood, Ice, and Steel, and it takes place during the colonization of America in the 16th and 17th centuries. You will play as Morgan Black, a knight of Malta who fights against the Spanish conquistadors and their allies.

- -

The second act is called Fire and Shadow, and it takes place during the American Revolution in the 18th century. You will play as John Black, a mercenary who joins the Continental Army and battles against the British Empire.

- -

The third act is called Steel and Thunder, and it takes place during the Napoleonic Wars in the 19th century. You will play as Amelia Black, a railroad tycoon who travels across Europe and Asia in search of her lost family legacy.

- -

The campaign mode in Age of Empires 3 offers a rich and varied gameplay experience that will appeal to both casual and hardcore strategy fans. You will explore and colonize new lands, trade and fight with other factions, build and manage your economy and military, research new technologies and upgrades, and customize your home city that provides you with bonuses and shipments.

- -

The campaign mode also features cinematic cutscenes, voice acting, and scripted scenarios that will immerse you in the history and culture of the era. You will meet historical figures like George Washington , Napoleon Bonaparte , Simon Bolivar , Queen Isabella , Tokugawa Ieyasu , Akbar , Ivan the Terrible , Elizabeth I , Samuel de Champlain , Tecumseh , Nathaniel Black , Sahin \"The Falcon\" , Kanyenke , Lizzie \"The Pirate\" , Alain Magnan , Warwick \"The Redcoat\" , Pierre Beaumont , Stuart Black , Nonahkee , Sven Kuechler , Huang He , Admiral Jinhai , Nanib Sahir , Rani Pravarthi , Colonel Edwardson , Chayton Black , Holme \"The Boneguard\" , Crazy Horse , Chief Brave Wolf , General Custer , Major Cooper , Kichiro , Daimyo Mototada Torii , Daimyo Junkei Kuroda , Daimyo Shingen Takeda , Daimyo Kenshin Uesugi , Daimyo Nobunaga Oda , Daimyo Hideyoshi Toyotomi , Daimyo Ieyasu Tokugawa .

- -

If you want to enjoy the campaign mode in Age of Empires 3, here are some tips and tricks:

- - -

Conclusion

- -

Age of Empires 3 no CD crack GameCopyWorld is a great way to play the classic strategy game without a disc. You can download and install it easily from our website, and enjoy all the features and content of the game without any hassle. You can also play it online on ESOCommunity servers with other players who have downloaded Age of Empires 3 no CD crack GameCopyWorld.

- -

In this article, we have shown you how to download and install Age of Empires 3 no CD crack GameCopyWorld, how to master the combat system in Age of Empires 3, and how to enjoy the campaign mode in Age of Empires 3. We hope you have found this article helpful and informative, and that you have learned some useful tips and tricks to boost your game.

- -

If you liked this article, please share it with your friends who are also fans of Age of Empires 3. And if you have any questions or comments, feel free to leave them below. We would love to hear from you!

    
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Automation Studio 5.6 Crack Freel !!TOP!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Automation Studio 5.6 Crack Freel !!TOP!!.md deleted file mode 100644 index c4e627e6eb71b8bd5a9f4765bcc796b855373b33..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Automation Studio 5.6 Crack Freel !!TOP!!.md +++ /dev/null @@ -1,24 +0,0 @@ -

Download Automation Studio 5.6 Crack Freel


    



-
-Winsound.com Automation Studio; Read the Manuals and FAQs in the Digital Audio Forum; Learn More About Old Stock Author: Dan, JVC Author: Jack Szabo from Jack's JVC Revamp; Jack's JVC Revamp 5,…Category: Audio - Digital Audio - Components & Equipment Other Related Categories AudioSoftwareTuning & MeasurementsAudioCables & DevicesToolsMagazines & JournalsMembers ClubsOther Educational Sites Review Top Posts Analyze Audio at What Hi, I’m Dan. With a knowledge of some 35 years of audio, I have been writing about the companies, products, and technologies in this business since 1999. I am an Authorized JVC Dealer, and the Audio & Network Assistant Editor here at Home Theater Forum. View my complete profile - -Repair Shop Studios now offers a series of licensing programs that can enable you to generate a royalty stream for your independently developed projects, including the JVC AiS Software Suite, the JVC AiS Suite, and the JVC AiS Suite Plus. - -Thanks for the info! - -I can't find the manuals for this one either. Will just have to use the information above in this thread I guess. On the CD there are 2 files for the CD Writer, a program for the CD writer and another for the CD Writer Service. - -I have the new version 1.02 and have used the CD Writer 1.02 with software version AOS22 which says the disc I used was OSD 2.6 version. I have also used the CD Writer version 1.02 with software version BOS21 with no OSD disc. The CD Writer version 1.02 with AOS22 will not write on my ATR Vista. - -I did a google search and found this in an earlier post but can't find the post right now - -You are using CD writer 1.02 with AOS22, which is compatible with Vista x64. Your software version is not compatible. XP works fine as you are using the XP version of the program. - -Use a CD Writer version 1.2 software. - -You will need to look in your Cd writing software. I know it's not simple but you will find the version 2.6 in there. I had a similar problem with some software I bought and it took a little investigation to determine that it wasn't the CD writer software. - -I have the new version 1.02 and have used the CD Writer 1. 4fefd39f24
-
-
-

diff --git a/spaces/1line/AutoGPT/autogpt/processing/__init__.py b/spaces/1line/AutoGPT/autogpt/processing/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Acrobat Reader X The Power of PDF Productivity.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Acrobat Reader X The Power of PDF Productivity.md deleted file mode 100644 index 9f50b1a9eafd0c69846e3fa085a032c7ee3bfbf1..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Adobe Acrobat Reader X The Power of PDF Productivity.md +++ /dev/null @@ -1,167 +0,0 @@ - -

How to Download Adobe Reader X

-

    Adobe Reader X is a free program that allows you to view, print, and comment on PDF files. PDF stands for Portable Document Format, a file format that preserves the layout, fonts, images, and hyperlinks of a document. PDF files are widely used for sharing information across different platforms and devices.
    

-

    
-

If you want to access PDF files on your computer or mobile device, you need Adobe Reader X. With this software, you can not only open and view PDFs, but also fill out forms, sign documents, add annotations, and more. Adobe Reader X also offers some advanced features, such as converting PDFs to other file formats, password protecting PDFs, comparing PDFs, and integrating with cloud storage services.

-

In this article, we will show you how to download Adobe Reader X for Windows and Mac, as well as how to troubleshoot some common installation issues. Follow the steps below and enjoy the benefits of Adobe Reader X.

-

How to Download Adobe Reader X for Windows

-

If you are using a Windows computer, here are the steps to download and install Adobe Reader X:

-
    -
      1. Check your system requirements. Before you download Adobe Reader X, make sure that your computer meets the minimum system requirements. You can find them on this page. You will need a Windows operating system (Windows Server or Windows XP/Vista/7/8/10), an Intel or AMD processor, at least 256 MB of RAM, at least 260 MB of hard disk space, a screen resolution of at least 1024 x 576 pixels, and an Internet browser (Internet Explorer or Firefox).
      2. Go to the official Adobe website. Open your Internet browser and go to this page. This is where you can download Acrobat Reader for free.
      3. Choose your language and version. On the download page, you will see a drop-down menu where you can select your language. You can also choose whether you want to download Acrobat Reader for Windows (32-bit or 64-bit) or Mac OS. Make sure you select the correct version for your system.
      4. Click the Download button. After choosing your language and version, click the yellow Download button. You will see a pop-up window asking you to save the file. Choose a location on your computer where you want to save the file and click Save.
      5. Run the installer and follow the instructions. Once the download is complete, locate the file on your computer and double-click it to run the installer. You will see a welcome screen where you can choose whether you want to install Acrobat Reader as a default PDF viewer or not. Click Next and follow the on-screen instructions to complete the installation. You may need to restart your computer to finish the installation.
    
-

Congratulations, you have successfully downloaded and installed Adobe Reader X for Windows. You can now open and view any PDF file on your computer with this software.

-

    
-

How to Download Adobe Reader X for Mac

-

If you are using a Mac computer, here are the steps to download and install Adobe Reader X:

-
    -
      1. Check your system requirements. Before you download Adobe Reader X, make sure that your computer meets the minimum system requirements. You can find them on this page. You will need a Mac OS X operating system (version 10.5.8 or later), an Intel processor, at least 512 MB of RAM, at least 415 MB of hard disk space, a screen resolution of at least 1024 x 768 pixels, and an Internet browser (Safari or Firefox).
      2. Go to the official Adobe website. Open your Internet browser and go to this page. This is where you can download Acrobat Reader for free.
      3. Choose your language and version. On the download page, you will see a drop-down menu where you can select your language. You can also choose whether you want to download Acrobat Reader for Windows (32-bit or 64-bit) or Mac OS. Make sure you select the correct version for your system.
      4. Click the Download button. After choosing your language and version, click the yellow Download button. You will see a pop-up window asking you to save the file. Choose a location on your computer where you want to save the file and click Save.
      5. Open the DMG file and drag the icon to the Applications folder. Once the download is complete, locate the file on your computer and double-click it to open it. You will see a window with an icon of Adobe Reader X and a shortcut to the Applications folder. Drag the icon of Adobe Reader X to the Applications folder and drop it there. This will copy the software to your computer.
    
-

Congratulations, you have successfully downloaded and installed Adobe Reader X for Mac. You can now open and view any PDF file on your computer with this software.

-

How to Troubleshoot Adobe Reader X Installation Issues

-

Sometimes, you may encounter some issues when installing or using Adobe Reader X. Here are some common issues and solutions that may help you fix them:

-

Reinstall Adobe Reader X

-

If Adobe Reader X does not work properly or crashes frequently, you may need to reinstall it. To do this, follow these steps:

      1. Close Adobe Reader X and any open PDF files.
      2. Uninstall Adobe Reader X from Control Panel > Programs and Features (Windows) or by dragging the application to the Trash (Mac).
      3. Restart your computer.
      4. Download a fresh copy of the installer from the Adobe website and install it again as described above.
    

This should fix any corrupted or missing files that may cause problems with Adobe Reader X.

-

Disable Protected Mode at Startup

-

If Adobe Reader X does not open or displays an error message when opening a PDF file, you may need to disable Protected Mode at Startup. This is a security feature that prevents malicious code from running on your computer, but it may also interfere with some PDF files or features. To disable Protected Mode at Startup, follow these steps:

      1. Open Adobe Reader X.
      2. Go to Edit > Preferences.
      3. Find the Enable Protected Mode at startup option (it is listed under the General category in Reader X; in later versions it appears under Security (Enhanced)).
      4. Uncheck the option, click OK, and restart Adobe Reader X.
    

This should allow you to open any PDF file without errors or issues.

-

Check for permission issues

-

If Adobe Reader X does not save or print PDF files, you may need to check for permission issues. This means that you may not have enough access rights to modify or use certain files or folders on your computer. To check for permission issues, follow these steps:

      1. Right-click the PDF file, or the folder you are trying to save or print from, and choose Properties.
      2. Open the Security tab and make sure your user account has Read and Write (Modify) permissions.
      3. If it does not, click Edit and grant your account those permissions; you may need administrator rights to do this.
      4. Try saving or printing again. Running Adobe Reader X as an administrator can also work around permission problems.
    

This should resolve any permission issues that may prevent you from saving or printing PDF files.
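    If you are comfortable with the command line, the same permission check can be done there as well. Treat the snippet below as an optional sketch: the folder path is only an example, and whether you need an elevated (administrator) Command Prompt depends on where the folder lives.
    
    ```
    rem Show the current permissions on the folder you save PDFs to (the path is an example)
    icacls "C:\Users\%USERNAME%\Documents\PDFs"
    
    rem Grant your own account modify rights on that folder and everything inside it
    rem (OI) = object inherit, (CI) = container inherit, M = modify
    icacls "C:\Users\%USERNAME%\Documents\PDFs" /grant "%USERNAME%:(OI)(CI)M"
    ```
    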

-

Repair Installation

-

If Adobe Reader X does not launch or shows an error message when launching, you may need to repair the installation. This will fix any damaged or missing components that may affect the performance of Adobe Reader X. To repair the installation, follow these steps:

      1. If Adobe Reader X opens at all, choose Help > Repair Adobe Reader Installation and let the repair finish.
      2. If it does not open, go to Control Panel > Programs and Features, select Adobe Reader X, click Change, and choose Repair.
      3. Restart your computer once the repair is complete.
    

This should fix any errors or issues that may prevent Adobe Reader X from launching.

-

Force open the files with Adobe Reader X

-

If Adobe Reader X does not open PDF files by default, you may need to force open them with Adobe Reader X. This will make sure that Adobe Reader X is the default program for opening PDF files on your computer. To force open PDF files with Adobe Reader X, follow these steps:

      1. Right-click any PDF file and choose Open with > Choose default program (Windows) or Get Info > Open with (Mac).
      2. Select Adobe Reader X from the list of programs.
      3. Check Always use the selected program to open this kind of file (Windows) or click Change All (Mac) and confirm.
    

This should make Adobe Reader X the default program for opening PDF files on your computer.
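    On older versions of Windows you can also inspect or change the file association from an elevated Command Prompt. This is only a sketch: the ProgID shown (AcroExch.Document) is the one Adobe Reader has traditionally registered, but the exact name varies by version, so check the output of the first two commands before changing anything, and note that on Windows 10 the Settings > Default apps page usually takes precedence over this setting.
    
    ```
    rem See which file type .pdf is currently associated with
    assoc .pdf
    
    rem List Acrobat-related file types and the commands they run (the ProgID may differ on your system)
    ftype | findstr /i acro
    
    rem Point .pdf at Adobe Reader's document type (assumed ProgID; requires an elevated prompt)
    assoc .pdf=AcroExch.Document
    ```
    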

-

Conclusion

-

In this article, we have shown you how to download Adobe Reader X for Windows and Mac, as well as how to troubleshoot some common installation issues. Adobe Reader X is a free software that allows you to view, print, and comment on PDF files. It also offers some advanced features, such as converting PDFs to other file formats, password protecting PDFs, comparing PDFs, and integrating with cloud storage services. With Adobe Reader X, you can access any PDF file on your computer or mobile device with ease and convenience.

-

If you want to learn more about Adobe Reader X, you can visit this page for more information and resources. You can also check out this page for some tips and tricks on how to use Adobe Reader X effectively. We hope you have enjoyed this article and found it helpful. Thank you for reading!

-

FAQs

-

What is the difference between Acrobat Reader and Acrobat Pro?

-

    Acrobat Reader is a free program that allows you to view, print, and comment on PDF files. Acrobat Pro is a paid application that allows you to create, edit, convert, sign, and share PDF files. Acrobat Pro also has more features and tools than Acrobat Reader, such as OCR, redaction, optimization, accessibility, and collaboration.
    

-

How can I update Adobe Reader X to the latest version?

-

You can update Adobe Reader X to the latest version by following these steps:

      1. Open Adobe Reader X.
      2. Go to Help > Check for Updates.
      3. Follow the prompts to download and install any available update, then restart the program.
    

You can also enable automatic updates by going to Edit > Preferences > Updater and selecting Automatically install updates.

-

How can I open a password-protected PDF with Adobe Reader X?

-

You can open a password-protected PDF with Adobe Reader X by following these steps:

      1. Open the PDF file in Adobe Reader X as you would any other document.
      2. When the password prompt appears, type the document password and click OK. The file then opens normally.
    

If you do not know the password, you will not be able to open the PDF file. You will need to contact the creator of the PDF file and ask for the password.

-

How can I annotate PDFs with Adobe Reader X?

-

You can annotate PDFs with Adobe Reader X by following these steps:

      1. Open the PDF file in Adobe Reader X.
      2. Click Comment at the top right of the toolbar to open the annotation tools.
      3. Choose a tool such as Sticky Note or Highlight Text, then click or drag on the page and type your note.
      4. Save the file to keep your annotations.
    

Your annotations will be saved with the PDF file and can be viewed by anyone who opens it with Adobe Reader X or any other PDF viewer.

-

How can I access my PDFs from anywhere with Adobe Reader X?

-

You can access your PDFs from anywhere with Adobe Reader X by following these steps:

      1. Create a free Adobe ID if you do not already have one.
      2. Sign in from Adobe Reader X and upload your PDF files to Adobe's online document service (Acrobat.com at the time of Reader X).
      3. Sign in with the same Adobe ID on another computer or mobile device to open the files you uploaded.
    

You can also share your PDF files with others, edit them online, or convert them to other file formats with Adobe Document Cloud.

    
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street APK for PC How to Play the Stunning Racing Game on Your Laptop or Desktop.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street APK for PC How to Play the Stunning Racing Game on Your Laptop or Desktop.md deleted file mode 100644 index 9f811fa3e61513ff64d97f0ebf2389884088d1a0..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street APK for PC How to Play the Stunning Racing Game on Your Laptop or Desktop.md +++ /dev/null @@ -1,131 +0,0 @@ -
-

How to Download and Play CarX Street on PC

-

CarX Street is a racing game developed by CarX Technologies, LLC. It is an open-world street racer that lets you explore the large city and its surroundings, from busy city streets to spiral mountain roads and mesmerizing coastal highways. You can race to collect legendary racing cars and display them in your garage, or challenge other players in real network races. You can also build the car of your dreams using part tuning that unlocks all the physics of CarX Technology car behavior.

-

If you are a fan of racing games, you might want to play CarX Street on your PC instead of your mobile device. Playing on PC has many advantages, such as a larger screen, better graphics, smoother performance, and more comfortable controls. In this article, we will show you how to download and install CarX Street on your PC using different emulators. We will also give you some tips and tricks to help you enjoy the game more.

-

    
-

What is CarX Street?

-

CarX Street is a simulation racing video game that offers realistic car physics and high-speed drifting. The game also features different map types from around the world, and players can choose from several different game modes. Players can compete against other players, or participate in races and events.

-

Features of CarX Street

-

    Some of the features of CarX Street are realistic CarX physics with high-speed drifting, a large open world that ranges from busy city streets to mountain roads and coastal highways, detailed part tuning and visual customization, a garage for your growing car collection, and real network races against other players.
    

- -

Benefits of playing CarX Street on PC

-

    Playing CarX Street on PC has many benefits, such as a larger screen, better graphics, smoother performance, and more comfortable keyboard-and-mouse controls.
    

- -

    How to download and install CarX Street on PC
    

-

If you want to play CarX Street on your PC, you will need to use an Android emulator. An emulator is a software that mimics the Android operating system on your computer, allowing you to run Android apps and games. There are many emulators available, but we will show you how to use three of the most popular ones: BlueStacks, NoxPlayer, and LDPlayer.

-

Using BlueStacks emulator

-

BlueStacks is one of the most widely used Android emulators, with over 500 million users worldwide. It is compatible with both Windows and Mac operating systems, and it has a user-friendly interface and advanced features. Here are the steps to download and install CarX Street on PC using BlueStacks:

-
    -
      1. Download and install BlueStacks on your PC from https://www.bluestacks.com/.
      2. Complete Google sign-in to access the Play Store, or do it later.
      3. Look for CarX Street in the search bar at the top right corner.
      4. Click to install CarX Street from the search results.
      5. Complete Google sign-in (if you skipped step 2) to install CarX Street.
      6. Click the CarX Street icon on the home screen to start playing.
    
-

Using NoxPlayer emulator

-

NoxPlayer is another popular Android emulator, with over 150 million users worldwide. It is also compatible with both Windows and Mac operating systems, and it has a simple and fast interface and performance. Here are the steps to download and install CarX Street on PC using NoxPlayer:

-
    -
      1. Download and install NoxPlayer on your PC from https://www.bignox.com/.
      2. Run the installation package and complete the installation.
      3. Open NoxPlayer and search for CarX Street in the Google Play Store.
      4. Install the game and launch it to start playing.
    
-

Using LDPlayer emulator

-

LDPlayer is a newer Android emulator, but it has gained popularity among gamers for its high performance and compatibility. It is also compatible with both Windows and Mac operating systems, and it has a smooth and stable interface and features. Here are the steps to download and install CarX Street on PC using LDPlayer:

-

    

-
    -
      1. Download and install LDPlayer on your PC from https://www.ldplayer.net/.
      2. Open LDPlayer and search for CarX Street in the LD Store or Google Play Store.
      3. Install the game and launch it to start playing.
    
-
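    All three emulators can also install an APK file directly, which is handy if you already have CarX Street as an APK instead of installing it through the Play Store. The sketch below assumes you have Android platform-tools (adb) installed, that your emulator exposes an ADB port (the port differs between emulators, so check its documentation), and that the APK filename is only a placeholder.
    
    ```
    rem Connect adb to the running emulator (5555 is an example port; NoxPlayer commonly uses 62001)
    adb connect 127.0.0.1:5555
    
    rem Confirm the emulator shows up as a connected device
    adb devices
    
    rem Install (or update) the game from a local APK file - the filename is a placeholder
    adb install -r CarXStreet.apk
    ```
    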

Tips and tricks for CarX Street

-

Now that you know how to play CarX Street on your PC, you might want some tips and tricks to help you improve your skills and enjoy the game more. Here are some of them:

-

Follow the tutorial

-

The game has a tutorial that will teach you the basics of driving, racing, drifting, tuning, and more. It is highly recommended that you follow the tutorial before jumping into the action, as it will help you get familiar with the game mechanics and controls. You can also revisit the tutorial anytime from the settings menu if you need a refresher.

-

Roam through the city for more rewards

-

The game has an open world that you can explore at your own pace. You can find hidden spots, shortcuts, secrets, and rewards by roaming through the city. You can also encounter random events, challenges, and races that will give you more money, reputation, or items. Roaming through the city is also a good way to practice your driving skills and test your car's performance.

-

Take part in sprints and clubs

-

The game has two main modes: sprints and clubs. Sprints are short races that last under a minute, where you have to reach the finish line as fast as possible. Clubs are longer, story-driven competitions where you have to join a club, defeat its boss, and prove yourself as the best driver in the city. Both modes offer different rewards and challenges, so try them both out and see which one suits your style more.

-

Go for the best cars and customize them

-

The game has over 50 official vehicles from the best automakers in the world. You can buy them with in-game currency or real money, or earn them by completing tasks or events. You can also customize your car with a detailed car-building system that lets you swap parts, upgrade components, paint colors, add stickers, and more. You can also customize your garage with various decorations and display your car collection. Go for the best cars and make them your own.

-

Conclusion

-

CarX Street is a fun and realistic racing game that lets you experience the thrill of street racing. You can explore the open world, collect and customize your cars, and compete with other players. You can also play CarX Street on your PC using an Android emulator, which will give you many benefits such as a larger screen, better graphics, smoother performance, and more comfortable controls. If you are looking for a racing game that will keep you entertained and challenged, you should give CarX Street a try.

-

FAQs

-

Here are some frequently asked questions about CarX Street:

-

    
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/CapCut Edit Videos like a Pro with TikToks Official Video Editor and Video Maker - Free Download.md b/spaces/1phancelerku/anime-remove-background/CapCut Edit Videos like a Pro with TikToks Official Video Editor and Video Maker - Free Download.md deleted file mode 100644 index fb2bee50608773153ef1ef44f8b4e233e3036e4e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/CapCut Edit Videos like a Pro with TikToks Official Video Editor and Video Maker - Free Download.md +++ /dev/null @@ -1,93 +0,0 @@ - -

How to Download and Use CapCut Video Editor for TikTok

-

TikTok is one of the most popular social media platforms for creating and sharing short videos. Whether you want to make funny, educational, or inspirational videos, you need a good video editor to make them stand out. In this article, we will show you how to download and use CapCut, the official video editor and maker app for TikTok.

-

What is CapCut?

-

CapCut is a free video editor and maker app that is compatible with TikTok. It is developed by ByteDance, the same company that owns TikTok. CapCut allows you to edit videos on your mobile device with ease and fun. You can also use it to create videos for other social media platforms, such as YouTube, Instagram, Facebook, and WhatsApp.

-

    
-

CapCut is a free video editor and maker app for TikTok

-

CapCut has everything you need to create stunning, high-quality videos. You can import your own videos and photos or record new ones in the app. You can also access a massive music library and exclusive TikTok songs. You can extract audio from videos or add your own voice-overs. You can also use AI tools to enhance your videos, such as auto captions, background removal, text-to-speech, motion tracking, and more.

-

CapCut offers basic and advanced editing features

-

CapCut has a user-friendly interface that lets you edit videos with simple gestures. You can trim, cut, merge, split, reverse, speed up, slow down, zoom in, zoom out, freeze, and animate your clips. You can also add text, stickers, filters, effects, transitions, and colors to your videos. You can use keyframe animation to customize every setting. You can also use chroma key to remove specific colors from videos. You can apply picture-in-picture (PIP) feature to add video and photo layers above the clip. You can also use the stabilizing feature to keep video footage steady.

-

CapCut supports direct exports to TikTok and other social media platforms

-

    CapCut lets you export your videos in custom resolutions and formats. You can export in HD quality, with support for 4K 60 fps export and smart HDR. You can also adjust the format and share your creativity on TikTok and other social media platforms with one tap.
    

-

How to Download CapCut for Android and iOS

-

Downloading CapCut is easy and fast. Here are the steps to download CapCut for Android and iOS devices.

-

Download CapCut from Google Play Store or Apple App Store

-

You can download CapCut for free from Google Play Store or Apple App Store. Just search for "CapCut" in the store and tap Install or Get. The app size is about 100 MB.

-

Open CapCut and tap New Project to start editing

-

Once you have downloaded CapCut, open it on your device. You don't need a TikTok account or any other type of account to use CapCut. You can start editing right away by tapping New Project on the home screen.

-

Select a video or photos to edit and tap Add

-

You can select a video or photos from your device gallery or record a new one in the app. You can also use the search feature to find videos and photos online. You can select multiple files and tap Add to import them to your project. You can also rearrange, delete, or duplicate the clips in your timeline.

-

    
-How to Use CapCut to Edit Videos for TikTok
-Editing videos with CapCut is fun and easy. Here are some tips on how to use CapCut to edit videos for TikTok.
-Use the editing tools to trim, crop, reverse, speed up, and animate your clips
-Use the editing tools at the bottom of the screen to adjust your clips. Tap Trim to cut out unwanted parts of your video, Crop to change the aspect ratio and zoom in or out, Reverse to play the clip backwards, Speed to change the playback speed, and Animate to add motion effects.
-Add text, stickers, filters, effects, and music to your videos
-Add text, stickers, filters, effects, and music by tapping the icons on the right side of the screen. Tap Text for captions, titles, or subtitles; Sticker for emojis, icons, or images; Filter for color presets; Effect for visual effects; and Music for songs, sound effects, or voice-overs.
-Use the templates and styles to enhance your videos
-Use templates and styles by tapping the icons on the left side of the screen. Tap Template to apply pre-made themes and layouts, or Style to apply different artistic styles and filters.
-Tap Export to save and share your videos
-When you are done editing, tap Export at the top right corner of the screen. Choose the resolution, format, and quality of your video, and enable watermark removal if you want. Then tap Save or Share to save the video to your device or post it directly to TikTok or other social media platforms.
-Benefits of Using CapCut for TikTok Videos
-Using CapCut for TikTok videos has many benefits. Here are some of them.
-CapCut is easy to use and versatile
-CapCut is designed for beginners and professionals alike. Its simple, intuitive interface makes editing quick and fun, and its wide range of features and options lets you customize videos to your preferences and needs.
-CapCut has a large library of sounds and animations
-CapCut offers a large library of sounds and animations that you can use for free. You get access to thousands of regularly updated songs and sound effects, including popular and trending TikTok-exclusive tracks, plus hundreds of dynamic, creative animations.
-CapCut can create stunning, high-quality videos
-CapCut can produce stunning, high-quality videos that will impress your audience. You can export in HD, with support for 4K 60 fps output and smart HDR, and use its AI tools to enhance your videos automatically.
-Conclusion
-CapCut is a free video editor and maker app for TikTok that you can download and use on your Android or iOS device. It has everything you need to create stunning, high-quality videos with ease. You can also use it to make videos for other platforms, such as YouTube, Instagram, Facebook, and WhatsApp. If you want to make amazing TikTok videos, download CapCut today!
-Frequently Asked Questions
-Is CapCut safe?
-Yes, CapCut is safe and secure. It does not contain any viruses or malware, and it does not collect any personal information from users.
-Is CapCut free?
-Yes, CapCut is free, with no hidden fees or charges, and no annoying ads or watermarks.
-How do I update CapCut?
-You can update CapCut from the Google Play Store or the Apple App Store by tapping Update, or enable automatic updates in your device settings.
-How do I delete CapCut?
-Go to your device settings, tap Apps or Applications, find CapCut, and tap Uninstall or Delete. You can also long-press the app icon and tap Remove or Delete.
-How do I contact CapCut support?
-Open the app settings and tap Feedback or Help. You can also email capcut.support@bytedance.com or visit https://www.capcut.net/.
-401be4b1e0
\ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/Dockerfile b/spaces/2023Liu2023/bingo/Dockerfile deleted file mode 100644 index 3aa2b29b5fc4fa8b8238955acd7f1fde13ce5e1a..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/Dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user && mkdir -p $HOME/app && chown -R user $HOME - -# Switch to the "user" user -USER user - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app/ - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app/ - -RUN npm run build - -ENV PORT 7860 -EXPOSE 7860 - -CMD npm start diff --git a/spaces/2023Liu2023/bingo/src/app/page.tsx b/spaces/2023Liu2023/bingo/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
- - - ) -} diff --git a/spaces/2023Liu2023/bingo/src/components/chat.tsx b/spaces/2023Liu2023/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
- -
- - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
- -
- ) : null} - - ) : null} -
- - -
- ) -} diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/transform.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/transform.py deleted file mode 100644 index 77aaa722c4a5544ac50de6df35d3e922f63b111d..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/transform.py +++ /dev/null @@ -1,45 +0,0 @@ -from torchvision.transforms import ( - Normalize, - Compose, - RandomResizedCrop, - InterpolationMode, - ToTensor, - Resize, - CenterCrop, -) - - -def _convert_to_rgb(image): - return image.convert("RGB") - - -def image_transform( - image_size: int, - is_train: bool, - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), -): - normalize = Normalize(mean=mean, std=std) - if is_train: - return Compose( - [ - RandomResizedCrop( - image_size, - scale=(0.9, 1.0), - interpolation=InterpolationMode.BICUBIC, - ), - _convert_to_rgb, - ToTensor(), - normalize, - ] - ) - else: - return Compose( - [ - Resize(image_size, interpolation=InterpolationMode.BICUBIC), - CenterCrop(image_size), - _convert_to_rgb, - ToTensor(), - normalize, - ] - ) diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv.py deleted file mode 100644 index f86409254b8d0d5f00de82cc0a9eed93cc8a40dc..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_adv.py +++ /dev/null @@ -1,374 +0,0 @@ -import os -import torch -import torch.nn.functional as F -import torch.nn as nn -import numpy as np - -from text_to_speech.modules.tts.portaspeech.portaspeech import PortaSpeech -from text_to_speech.modules.tts.syntaspeech.multi_window_disc import Discriminator -from tasks.tts.fs import FastSpeechTask -from text_to_speech.utils.audio.align import mel2token_to_dur -from text_to_speech.utils.commons.hparams import hparams -from text_to_speech.utils.metrics.diagonal_metrics import get_focus_rate, get_phone_coverage_rate, get_diagonal_focus_rate -from text_to_speech.utils.nn.model_utils import num_params -from text_to_speech.utils.commons.tensor_utils import tensors_to_scalars -from text_to_speech.utils.audio.pitch.utils import denorm_f0, norm_f0 -from text_to_speech.utils.audio.pitch_extractors import get_pitch -from text_to_speech.utils.metrics.dtw import dtw as DTW - -from text_to_speech.utils.plot.plot import spec_to_figure -from text_to_speech.utils.text.text_encoder import build_token_encoder - - -class PortaSpeechAdvTask(FastSpeechTask): - def __init__(self): - super().__init__() - data_dir = hparams['binary_data_dir'] - self.word_encoder = build_token_encoder(f'{data_dir}/word_set.json') - self.build_disc_model() - self.mse_loss_fn = torch.nn.MSELoss() - - def build_tts_model(self): - ph_dict_size = len(self.token_encoder) - word_dict_size = len(self.word_encoder) - self.model = PortaSpeech(ph_dict_size, word_dict_size, hparams) - - self.gen_params = [p for p in self.model.parameters() if p.requires_grad] - self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)] - self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)] - self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)] - self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) 
and ('bert' not in k) and p.requires_grad ] - - self.use_bert = True if len(self.bert_params) > 0 else False - - def build_disc_model(self): - disc_win_num = hparams['disc_win_num'] - h = hparams['mel_disc_hidden_size'] - self.mel_disc = Discriminator( - time_lengths=[32, 64, 128][:disc_win_num], - freq_length=80, hidden_size=h, kernel=(3, 3) - ) - self.disc_params = list(self.mel_disc.parameters()) - - def on_train_start(self): - super().on_train_start() - for n, m in self.model.named_children(): - num_params(m, model_name=n) - if hasattr(self.model, 'fvae'): - for n, m in self.model.fvae.named_children(): - num_params(m, model_name=f'fvae.{n}') - - def _training_step(self, sample, batch_idx, optimizer_idx): - loss_output = {} - loss_weights = {} - disc_start = self.global_step >= hparams["disc_start_steps"] and hparams['lambda_mel_adv'] > 0 - if optimizer_idx == 0: - ####################### - # Generator # - ####################### - loss_output, model_out = self.run_model(sample, infer=False) - self.model_out_gt = self.model_out = \ - {k: v.detach() for k, v in model_out.items() if isinstance(v, torch.Tensor)} - if disc_start: - mel_p = model_out['mel_out'] - if hasattr(self.model, 'out2mel'): - mel_p = self.model.out2mel(mel_p) - o_ = self.mel_disc(mel_p) - p_, pc_ = o_['y'], o_['y_c'] - if p_ is not None: - loss_output['a'] = self.mse_loss_fn(p_, p_.new_ones(p_.size())) - loss_weights['a'] = hparams['lambda_mel_adv'] - if pc_ is not None: - loss_output['ac'] = self.mse_loss_fn(pc_, pc_.new_ones(pc_.size())) - loss_weights['ac'] = hparams['lambda_mel_adv'] - else: - ####################### - # Discriminator # - ####################### - if disc_start and self.global_step % hparams['disc_interval'] == 0: - model_out = self.model_out_gt - mel_g = sample['mels'] - mel_p = model_out['mel_out'] - o = self.mel_disc(mel_g) - p, pc = o['y'], o['y_c'] - o_ = self.mel_disc(mel_p) - p_, pc_ = o_['y'], o_['y_c'] - if p_ is not None: - loss_output["r"] = self.mse_loss_fn(p, p.new_ones(p.size())) - loss_output["f"] = self.mse_loss_fn(p_, p_.new_zeros(p_.size())) - if pc_ is not None: - loss_output["rc"] = self.mse_loss_fn(pc, pc.new_ones(pc.size())) - loss_output["fc"] = self.mse_loss_fn(pc_, pc_.new_zeros(pc_.size())) - total_loss = sum([loss_weights.get(k, 1) * v for k, v in loss_output.items() if isinstance(v, torch.Tensor) and v.requires_grad]) - loss_output['batch_size'] = sample['txt_tokens'].size()[0] - return total_loss, loss_output - - def run_model(self, sample, infer=False, *args, **kwargs): - txt_tokens = sample['txt_tokens'] - word_tokens = sample['word_tokens'] - spk_embed = sample.get('spk_embed') - spk_id = sample.get('spk_ids') - if not infer: - output = self.model(txt_tokens, word_tokens, - ph2word=sample['ph2word'], - mel2word=sample['mel2word'], - mel2ph=sample['mel2ph'], - word_len=sample['word_lengths'].max(), - tgt_mels=sample['mels'], - pitch=sample.get('pitch'), - spk_embed=spk_embed, - spk_id=spk_id, - infer=False, - global_step=self.global_step, - graph_lst=sample['graph_lst'], - etypes_lst=sample['etypes_lst'], - bert_feats=sample.get("bert_feats"), - cl_feats=sample.get("cl_feats") - ) - losses = {} - losses['kl_v'] = output['kl'].detach() - losses_kl = output['kl'] - losses_kl = torch.clamp(losses_kl, min=hparams['kl_min']) - losses_kl = min(self.global_step / hparams['kl_start_steps'], 1) * losses_kl - losses_kl = losses_kl * hparams['lambda_kl'] - losses['kl'] = losses_kl - - self.add_mel_loss(output['mel_out'], sample['mels'], losses) - if hparams['dur_level'] == 
'word': - self.add_dur_loss( - output['dur'], sample['mel2word'], sample['word_lengths'], sample['txt_tokens'], losses) - self.get_attn_stats(output['attn'], sample, losses) - else: - super(PortaSpeechAdvTask, self).add_dur_loss(output['dur'], sample['mel2ph'], sample['txt_tokens'], losses) - return losses, output - else: - use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur']) - output = self.model( - txt_tokens, word_tokens, - ph2word=sample['ph2word'], - word_len=sample['word_lengths'].max(), - pitch=sample.get('pitch'), - mel2ph=sample['mel2ph'] if use_gt_dur else None, - mel2word=sample['mel2word'] if use_gt_dur else None, - tgt_mels=sample['mels'], - infer=True, - spk_embed=spk_embed, - spk_id=spk_id, - graph_lst=sample['graph_lst'], - etypes_lst=sample['etypes_lst'], - bert_feats=sample.get("bert_feats"), - cl_feats=sample.get("cl_feats") - ) - return output - - def add_dur_loss(self, dur_pred, mel2token, word_len, txt_tokens, losses=None): - T = word_len.max() - dur_gt = mel2token_to_dur(mel2token, T).float() - nonpadding = (torch.arange(T).to(dur_pred.device)[None, :] < word_len[:, None]).float() - dur_pred = dur_pred * nonpadding - dur_gt = dur_gt * nonpadding - wdur = F.l1_loss((dur_pred + 1).log(), (dur_gt + 1).log(), reduction='none') - wdur = (wdur * nonpadding).sum() / nonpadding.sum() - - if hparams['lambda_word_dur'] > 0: - losses['wdur'] = wdur * hparams['lambda_word_dur'] - if hparams['lambda_sent_dur'] > 0: - sent_dur_p = dur_pred.sum(-1) - sent_dur_g = dur_gt.sum(-1) - sdur_loss = F.l1_loss(sent_dur_p, sent_dur_g, reduction='mean') - losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur'] - - with torch.no_grad(): - # calculate word-level abs_dur_error in micro-second - abs_word_dur_error = F.l1_loss(dur_pred , dur_gt, reduction='none') - abs_word_dur_error = (abs_word_dur_error * nonpadding).sum() / nonpadding.sum() - abs_word_dur_error = abs_word_dur_error * hparams['hop_size'] / hparams['audio_sample_rate'] * 1000 - losses['abs_word_dur_error'] = abs_word_dur_error - # calculate word-level abs_dur_error in second - sent_dur_p = dur_pred.sum(-1) - sent_dur_g = dur_gt.sum(-1) - abs_sent_dur_error = F.l1_loss(sent_dur_p, sent_dur_g, reduction='mean').mean() - abs_sent_dur_error = abs_sent_dur_error * hparams['hop_size'] / hparams['audio_sample_rate'] - losses['abs_sent_dur_error'] = abs_sent_dur_error - - def validation_step(self, sample, batch_idx): - outputs = {} - outputs['losses'] = {} - outputs['losses'], model_out = self.run_model(sample) - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = tensors_to_scalars(outputs) - if self.global_step % hparams['valid_infer_interval'] == 0 \ - and batch_idx < hparams['num_valid_plots']: - valid_results = self.save_valid_result(sample, batch_idx, model_out) - wav_gt = valid_results['wav_gt'] - mel_gt = valid_results['mel_gt'] - wav_pred = valid_results['wav_pred'] - mel_pred = valid_results['mel_pred'] - f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams) - f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams) - manhattan_distance = lambda x, y: np.abs(x - y) - dist, cost, acc, path = DTW(f0_pred_, f0_gt_, manhattan_distance) - outputs['losses']['f0_dtw'] = dist / len(f0_gt_) - return outputs - - def save_valid_result(self, sample, batch_idx, model_out): - sr = hparams['audio_sample_rate'] - f0_gt = None - mel_out = model_out['mel_out'] - if sample.get('f0') is not None: - f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu()) - 
self.plot_mel(batch_idx, sample['mels'], mel_out, f0s=f0_gt) - - # if self.global_step > 0: - wav_pred = self.vocoder.spec2wav(mel_out[0].cpu(), f0=f0_gt) - self.logger.add_audio(f'wav_val_{batch_idx}', wav_pred, self.global_step, sr) - # with gt duration - model_out = self.run_model(sample, infer=True, infer_use_gt_dur=True) - dur_info = self.get_plot_dur_info(sample, model_out) - del dur_info['dur_pred'] - wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt) - self.logger.add_audio(f'wav_gdur_{batch_idx}', wav_pred, self.global_step, sr) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_gdur_{batch_idx}', - dur_info=dur_info, f0s=f0_gt) - - # with pred duration - if not hparams['use_gt_dur']: - model_out = self.run_model(sample, infer=True, infer_use_gt_dur=False) - dur_info = self.get_plot_dur_info(sample, model_out) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'][0], f'mel_pdur_{batch_idx}', - dur_info=dur_info, f0s=f0_gt) - wav_pred = self.vocoder.spec2wav(model_out['mel_out'][0].cpu(), f0=f0_gt) - self.logger.add_audio(f'wav_pdur_{batch_idx}', wav_pred, self.global_step, sr) - # gt wav - mel_gt = sample['mels'][0].cpu() - wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt) - if self.global_step <= hparams['valid_infer_interval']: - self.logger.add_audio(f'wav_gt_{batch_idx}', wav_gt, self.global_step, sr) - - # add attn plot - if self.global_step > 0 and hparams['dur_level'] == 'word': - self.logger.add_figure(f'attn_{batch_idx}', spec_to_figure(model_out['attn'][0]), self.global_step) - - return {'wav_gt': wav_gt, 'wav_pred': wav_pred, 'mel_gt': mel_gt, 'mel_pred': model_out['mel_out'][0].cpu()} - - def get_attn_stats(self, attn, sample, logging_outputs, prefix=''): - # diagonal_focus_rate - txt_lengths = sample['txt_lengths'].float() - mel_lengths = sample['mel_lengths'].float() - src_padding_mask = sample['txt_tokens'].eq(0) - target_padding_mask = sample['mels'].abs().sum(-1).eq(0) - src_seg_mask = sample['txt_tokens'].eq(self.seg_idx) - attn_ks = txt_lengths.float() / mel_lengths.float() - - focus_rate = get_focus_rate(attn, src_padding_mask, target_padding_mask).mean().data - phone_coverage_rate = get_phone_coverage_rate( - attn, src_padding_mask, src_seg_mask, target_padding_mask).mean() - diagonal_focus_rate, diag_mask = get_diagonal_focus_rate( - attn, attn_ks, mel_lengths, src_padding_mask, target_padding_mask) - logging_outputs[f'{prefix}fr'] = focus_rate.mean().data - logging_outputs[f'{prefix}pcr'] = phone_coverage_rate.mean().data - logging_outputs[f'{prefix}dfr'] = diagonal_focus_rate.mean().data - - def get_plot_dur_info(self, sample, model_out): - if hparams['dur_level'] == 'word': - T_txt = sample['word_lengths'].max() - dur_gt = mel2token_to_dur(sample['mel2word'], T_txt)[0] - dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt - txt = sample['ph_words'][0].split(" ") - else: - T_txt = sample['txt_tokens'].shape[1] - dur_gt = mel2token_to_dur(sample['mel2ph'], T_txt)[0] - dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt - txt = self.token_encoder.decode(sample['txt_tokens'][0].cpu().numpy()) - txt = txt.split(" ") - return {'dur_gt': dur_gt, 'dur_pred': dur_pred, 'txt': txt} - - def build_optimizer(self, model): - - optimizer_gen = torch.optim.AdamW( - self.gen_params, - lr=hparams['lr'], - betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']), - weight_decay=hparams['weight_decay']) - - optimizer_disc = torch.optim.AdamW( - self.disc_params, - lr=hparams['disc_lr'], 
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']), - **hparams["discriminator_optimizer_params"]) if len(self.disc_params) > 0 else None - - return [optimizer_gen, optimizer_disc] - - def build_scheduler(self, optimizer): - return [ - FastSpeechTask.build_scheduler(self, optimizer[0]), # Generator Scheduler - torch.optim.lr_scheduler.StepLR(optimizer=optimizer[1], # Discriminator Scheduler - **hparams["discriminator_scheduler_params"]), - ] - - def on_before_optimization(self, opt_idx): - if opt_idx == 0: - nn.utils.clip_grad_norm_(self.dp_params, hparams['clip_grad_norm']) - if self.use_bert: - nn.utils.clip_grad_norm_(self.bert_params, hparams['clip_grad_norm']) - nn.utils.clip_grad_norm_(self.gen_params_except_bert_and_dp, hparams['clip_grad_norm']) - else: - nn.utils.clip_grad_norm_(self.gen_params_except_dp, hparams['clip_grad_norm']) - else: - nn.utils.clip_grad_norm_(self.disc_params, hparams["clip_grad_norm"]) - - def on_after_optimization(self, epoch, batch_idx, optimizer, optimizer_idx): - if self.scheduler is not None: - self.scheduler[0].step(self.global_step // hparams['accumulate_grad_batches']) - self.scheduler[1].step(self.global_step // hparams['accumulate_grad_batches']) - - ############ - # infer - ############ - def test_start(self): - super().test_start() - if hparams.get('save_attn', False): - os.makedirs(f'{self.gen_dir}/attn', exist_ok=True) - self.model.store_inverse_all() - - def test_step(self, sample, batch_idx): - assert sample['txt_tokens'].shape[0] == 1, 'only support batch_size=1 in inference' - outputs = self.run_model(sample, infer=True) - text = sample['text'][0] - item_name = sample['item_name'][0] - tokens = sample['txt_tokens'][0].cpu().numpy() - mel_gt = sample['mels'][0].cpu().numpy() - mel_pred = outputs['mel_out'][0].cpu().numpy() - mel2ph = sample['mel2ph'][0].cpu().numpy() - mel2ph_pred = None - str_phs = self.token_encoder.decode(tokens, strip_padding=True) - base_fn = f'[{batch_idx:06d}][{item_name.replace("%", "_")}][%s]' - if text is not None: - base_fn += text.replace(":", "$3A")[:80] - base_fn = base_fn.replace(' ', '_') - gen_dir = self.gen_dir - wav_pred = self.vocoder.spec2wav(mel_pred) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_pred, mel_pred, base_fn % 'P', gen_dir, str_phs, mel2ph_pred]) - if hparams['save_gt']: - wav_gt = self.vocoder.spec2wav(mel_gt) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_gt, mel_gt, base_fn % 'G', gen_dir, str_phs, mel2ph]) - if hparams.get('save_attn', False): - attn = outputs['attn'][0].cpu().numpy() - np.save(f'{gen_dir}/attn/{item_name}.npy', attn) - # save f0 for pitch dtw - f0_pred_, _ = get_pitch(wav_pred, mel_pred, hparams) - f0_gt_, _ = get_pitch(wav_gt, mel_gt, hparams) - np.save(f'{gen_dir}/f0/{item_name}.npy', f0_pred_) - np.save(f'{gen_dir}/f0/{item_name}_gt.npy', f0_gt_) - - print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}") - return { - 'item_name': item_name, - 'text': text, - 'ph_tokens': self.token_encoder.decode(tokens.tolist()), - 'wav_fn_pred': base_fn % 'P', - 'wav_fn_gt': base_fn % 'G', - } diff --git a/spaces/AIGText/GlyphControl/ldm/models/diffusion/__init__.py b/spaces/AIGText/GlyphControl/ldm/models/diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/mobilevit-small_4xb32_2000e_3c_noF/mobilevit-small_4xb32_2000e_3c_noF.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/mobilevit-small_4xb32_2000e_3c_noF/mobilevit-small_4xb32_2000e_3c_noF.py deleted file mode 100644 index 1dd70453e6fedc075f30a51e736d7c99f36c584f..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/work_dirs/mobilevit-small_4xb32_2000e_3c_noF/mobilevit-small_4xb32_2000e_3c_noF.py +++ /dev/null @@ -1,137 +0,0 @@ -model = dict( - type='ImageClassifier', - backbone=dict(type='MobileViT', arch='small'), - neck=dict(type='GlobalAveragePooling'), - head=dict( - type='LinearClsHead', - num_classes=7, - in_channels=640, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), - topk=( - 1, - 3, - ))) -dataset_type = 'CustomDataset' -data_preprocessor = dict( - num_classes=6, mean=[ - 0, - 0, - 0, - ], std=[ - 255, - 255, - 255, - ], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='RandomResizedCrop', scale=224), - dict(type='RandomFlip', prob=0.5, direction='horizontal'), - dict(type='PackInputs'), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='ResizeEdge', scale=288, edge='short'), - dict(type='CenterCrop', crop_size=256), - dict(type='PackInputs'), -] -train_dataloader = dict( - pin_memory=True, - persistent_workers=True, - collate_fn=dict(type='default_collate'), - batch_size=32, - num_workers=5, - dataset=dict( - type='CustomDataset', - data_root='data', - with_label=True, - ann_file='', - data_prefix='train', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='RandomResizedCrop', scale=224), - dict(type='RandomFlip', prob=0.5, direction='horizontal'), - dict(type='PackInputs'), - ]), - sampler=dict(type='DefaultSampler', shuffle=True)) -val_dataloader = dict( - pin_memory=True, - persistent_workers=True, - collate_fn=dict(type='default_collate'), - batch_size=32, - num_workers=5, - dataset=dict( - type='CustomDataset', - data_root='data', - with_label=True, - ann_file='', - data_prefix='val', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='ResizeEdge', scale=288, edge='short'), - dict(type='CenterCrop', crop_size=256), - dict(type='PackInputs'), - ]), - sampler=dict(type='DefaultSampler', shuffle=False)) -val_evaluator = dict( - type='Accuracy', topk=( - 1, - 3, - )) -test_dataloader = dict( - pin_memory=True, - persistent_workers=True, - collate_fn=dict(type='default_collate'), - batch_size=32, - num_workers=5, - dataset=dict( - type='CustomDataset', - data_root='data', - with_label=True, - ann_file='', - data_prefix='val', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='ResizeEdge', scale=288, edge='short'), - dict(type='CenterCrop', crop_size=256), - dict(type='PackInputs'), - ]), - sampler=dict(type='DefaultSampler', shuffle=False)) -test_evaluator = dict( - type='Accuracy', topk=( - 1, - 3, - )) -default_scope = 'mmpretrain' -default_hooks = dict( - timer=dict(type='IterTimerHook'), - logger=dict(type='LoggerHook', interval=10), - param_scheduler=dict(type='ParamSchedulerHook'), - checkpoint=dict(type='CheckpointHook', save_best='auto', interval=10), - sampler_seed=dict(type='DistSamplerSeedHook'), - visualization=dict(type='VisualizationHook', enable=False)) -env_cfg = dict( - cudnn_benchmark=False, - mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), - 
dist_cfg=dict(backend='nccl')) -vis_backends = [ - dict(type='LocalVisBackend'), -] -visualizer = dict( - type='UniversalVisualizer', - vis_backends=[ - dict(type='LocalVisBackend'), - dict(type='WandbVisBackend'), - ]) -log_level = 'INFO' -load_from = None -resume = False -randomness = dict(seed=None, deterministic=False) -optim_wrapper = dict( - optimizer=dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001)) -param_scheduler = dict(type='StepLR', by_epoch=True, step_size=10, gamma=0.98) -train_cfg = dict(by_epoch=True, max_epochs=2000, val_interval=10) -val_cfg = dict() -test_cfg = dict() -auto_scale_lr = dict(base_batch_size=256) -launcher = 'pytorch' -work_dir = './work_dirs/mobilevit-small_4xb32_2000e_3c_noF' diff --git a/spaces/AgProfile/GradioGenOpenAi/README.md b/spaces/AgProfile/GradioGenOpenAi/README.md deleted file mode 100644 index cd850fb09b770906e7e24e8e79dc15365e1127aa..0000000000000000000000000000000000000000 --- a/spaces/AgProfile/GradioGenOpenAi/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GradioGenOpenAi -emoji: ⚡ -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateButtons.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateButtons.js deleted file mode 100644 index 364c822546879678f7d6cb6cd546451f1d802055..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateButtons.js +++ /dev/null @@ -1,18 +0,0 @@ -import MergeStyle from './utils/MergeStyle.js'; -import Buttons from '../../buttons/Buttons.js'; -import CreateChild from './utils/CreateChild.js'; -import CreateChildren from './utils/CreateChildren.js'; - -var CreateButtons = function (scene, data, view, styles, customBuilders) { - data = MergeStyle(data, styles); - - // Replace data by child game object - CreateChild(scene, data, 'background', view, styles, customBuilders); - CreateChildren(scene, data, 'buttons', view, styles, customBuilders); - - var gameObject = new Buttons(scene, data); - scene.add.existing(gameObject); - return gameObject; -}; - -export default CreateButtons; \ No newline at end of file diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/uroman-tsv.sh b/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/uroman-tsv.sh deleted file mode 100644 index adb81f4894a0539d44ad4370eda029694211e82b..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/uroman/bin/uroman-tsv.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash -# Created by Thamme Gowda on June 17, 2019 - -DIR=$(dirname "${BASH_SOURCE[0]}") # get the directory name -# DIR=$(realpath "${DIR}") # resolve its full path if need be - -if [[ $# -lt 1 || $# -gt 2 ]]; then - >&2 echo "ERROR: invalid args" - >&2 echo "Usage: []" - exit 2 -fi - -INP=$1 -OUT=$2 - -CMD=$DIR/uroman.pl - -function romanize(){ - paste <(cut -f1 $INP) <(cut -f2 $INP | $CMD) -} - -if [[ -n $OUT ]]; then - romanize > $OUT -else - romanize -fi - - diff --git a/spaces/AlexZou/Deploy_Restoration/net/utils.py b/spaces/AlexZou/Deploy_Restoration/net/utils.py deleted file mode 100644 index 857c04df854b73c541277f14970100198f9420ef..0000000000000000000000000000000000000000 --- a/spaces/AlexZou/Deploy_Restoration/net/utils.py +++ /dev/null @@ -1,86 +0,0 @@ -import math 
-import torch -import torch.nn as nn -import numpy as np -from skimage.measure.simple_metrics import compare_psnr -from torchvision import models - - -def weights_init_kaiming(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.kaiming_normal(m.weight.data, a=0, mode='fan_in') - elif classname.find('Linear') != -1: - nn.init.kaiming_normal(m.weight.data, a=0, mode='fan_in') - elif classname.find('BatchNorm') != -1: - # nn.init.uniform(m.weight.data, 1.0, 0.02) - m.weight.data.normal_(mean=0, std=math.sqrt(2./9./64.)).clamp_(-0.025,0.025) - nn.init.constant(m.bias.data, 0.0) - -class VGG19_PercepLoss(nn.Module): - """ Calculates perceptual loss in vgg19 space - """ - def __init__(self, _pretrained_=True): - super(VGG19_PercepLoss, self).__init__() - self.vgg = models.vgg19(pretrained=_pretrained_).features - for param in self.vgg.parameters(): - param.requires_grad_(False) - - def get_features(self, image, layers=None): - if layers is None: - layers = {'30': 'conv5_2'} # may add other layers - features = {} - x = image - for name, layer in self.vgg._modules.items(): - x = layer(x) - if name in layers: - features[layers[name]] = x - return features - - def forward(self, pred, true, layer='conv5_2'): - true_f = self.get_features(true) - pred_f = self.get_features(pred) - return torch.mean((true_f[layer]-pred_f[layer])**2) - - -def batch_PSNR(img, imclean, data_range): - Img = img.data.cpu().numpy().astype(np.float32) - Iclean = imclean.data.cpu().numpy().astype(np.float32) - PSNR = 0 - for i in range(Img.shape[0]): - PSNR += compare_psnr(Iclean[i,:,:,:], Img[i,:,:,:], data_range=data_range) - return (PSNR/Img.shape[0]) - -def data_augmentation(image, mode): - out = np.transpose(image, (1,2,0)) - #out = image - if mode == 0: - # original - out = out - elif mode == 1: - # flip up and down - out = np.flipud(out) - elif mode == 2: - # rotate counterwise 90 degree - out = np.rot90(out) - elif mode == 3: - # rotate 90 degree and flip up and down - out = np.rot90(out) - out = np.flipud(out) - elif mode == 4: - # rotate 180 degree - out = np.rot90(out, k=2) - elif mode == 5: - # rotate 180 degree and flip - out = np.rot90(out, k=2) - out = np.flipud(out) - elif mode == 6: - # rotate 270 degree - out = np.rot90(out, k=3) - elif mode == 7: - # rotate 270 degree and flip - out = np.rot90(out, k=3) - out = np.flipud(out) - return np.transpose(out, (2,0,1)) - #return out - diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/misc.py b/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/misc.py deleted file mode 100644 index 7829f4d9f168557ce8a9a6dec289aa964234cb8c..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/misc.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import re -import contextlib -import numpy as np -import torch -import warnings -import dnnlib - -#---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. 
- -_constant_cache = dict() - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -#---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. - -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -#---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -#---------------------------------------------------------------------------- -# Context manager to suppress known warnings in torch.jit.trace(). - -class suppress_tracer_warnings(warnings.catch_warnings): - def __enter__(self): - super().__enter__() - warnings.simplefilter('ignore', category=torch.jit.TracerWarning) - return self - -#---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -#---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). 
- -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -#---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. - -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -#---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. - -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)} - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad) - -#---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -#---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - if ignore_regex is not None and re.fullmatch(ignore_regex, fullname): - continue - tensor = tensor.detach() - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname - -#---------------------------------------------------------------------------- -# Print summary table of module hierarchy. 
- -def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True): - assert isinstance(module, torch.nn.Module) - assert not isinstance(module, torch.jit.ScriptModule) - assert isinstance(inputs, (tuple, list)) - - # Register hooks. - entries = [] - nesting = [0] - def pre_hook(_mod, _inputs): - nesting[0] += 1 - def post_hook(mod, _inputs, outputs): - nesting[0] -= 1 - if nesting[0] <= max_nesting: - outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs] - outputs = [t for t in outputs if isinstance(t, torch.Tensor)] - entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs)) - hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()] - hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()] - - # Run module. - outputs = module(*inputs) - for hook in hooks: - hook.remove() - - # Identify unique outputs, parameters, and buffers. - tensors_seen = set() - for e in entries: - e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen] - e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen] - e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen] - tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs} - - # Filter out redundant entries. - if skip_redundant: - entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)] - - # Construct table. - rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']] - rows += [['---'] * len(rows[0])] - param_total = 0 - buffer_total = 0 - submodule_names = {mod: name for name, mod in module.named_modules()} - for e in entries: - name = '' if e.mod is module else submodule_names[e.mod] - param_size = sum(t.numel() for t in e.unique_params) - buffer_size = sum(t.numel() for t in e.unique_buffers) - output_shapes = [str(list(e.outputs[0].shape)) for t in e.outputs] - output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs] - rows += [[ - name + (':0' if len(e.outputs) >= 2 else ''), - str(param_size) if param_size else '-', - str(buffer_size) if buffer_size else '-', - (output_shapes + ['-'])[0], - (output_dtypes + ['-'])[0], - ]] - for idx in range(1, len(e.outputs)): - rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]] - param_total += param_size - buffer_total += buffer_size - rows += [['---'] * len(rows[0])] - rows += [['Total', str(param_total), str(buffer_total), '-', '-']] - - # Print table. - widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths))) - print() - return outputs - -#---------------------------------------------------------------------------- diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/img2img.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/img2img.md deleted file mode 100644 index 32435603c91082a02b6c3acfac1a355bde8a0ca5..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/img2img.md +++ /dev/null @@ -1,100 +0,0 @@ - - -# 텍스트 기반 image-to-image 생성 - -[[Colab에서 열기]] - -[`StableDiffusionImg2ImgPipeline`]을 사용하면 텍스트 프롬프트와 시작 이미지를 전달하여 새 이미지 생성의 조건을 지정할 수 있습니다. 
- -시작하기 전에 필요한 라이브러리가 모두 설치되어 있는지 확인하세요: - -```bash -!pip install diffusers transformers ftfy accelerate -``` - -[`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion)과 같은 사전학습된 stable diffusion 모델로 [`StableDiffusionImg2ImgPipeline`]을 생성하여 시작하세요. - - -```python -import torch -import requests -from PIL import Image -from io import BytesIO -from diffusers import StableDiffusionImg2ImgPipeline - -device = "cuda" -pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16).to( - device -) -``` - -초기 이미지를 다운로드하고 사전 처리하여 파이프라인에 전달할 수 있습니다: - -```python -url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" - -response = requests.get(url) -init_image = Image.open(BytesIO(response.content)).convert("RGB") -init_image.thumbnail((768, 768)) -init_image -``` - -
- - - -💡 `strength`는 입력 이미지에 추가되는 노이즈의 양을 제어하는 0.0에서 1.0 사이의 값입니다. 1.0에 가까운 값은 다양한 변형을 허용하지만 입력 이미지와 의미적으로 일치하지 않는 이미지를 생성합니다. - - - -프롬프트를 정의하고(지브리 스타일(Ghibli-style)에 맞게 조정된 이 체크포인트의 경우 프롬프트 앞에 `ghibli style` 토큰을 붙여야 합니다) 파이프라인을 실행합니다: - -```python -prompt = "ghibli style, a fantasy landscape with castles" -generator = torch.Generator(device=device).manual_seed(1024) -image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0] -image -``` - -
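To see how `strength` trades fidelity to the initial image against variation, the sketch below sweeps a few values. It is a minimal sketch that assumes the `pipe`, `init_image`, and `prompt` objects defined in the snippets above; the output filenames are only placeholders.

```python
import torch

# Lower strength keeps the output close to init_image;
# higher strength lets the prompt dominate.
for strength in (0.3, 0.5, 0.75, 0.9):
    generator = torch.Generator(device="cuda").manual_seed(1024)
    image = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength,
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    image.save(f"fantasy_landscape_strength_{strength}.png")
```

Re-seeding the generator for every value keeps the comparison down to the effect of `strength` alone.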
- -다른 스케줄러로 실험하여 출력에 어떤 영향을 미치는지 확인할 수도 있습니다: - -```python -from diffusers import LMSDiscreteScheduler - -lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config) -pipe.scheduler = lms -generator = torch.Generator(device=device).manual_seed(1024) -image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0] -image -``` - -
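The same swap works for any compatible scheduler. As a rough sketch (again assuming the `pipe`, `init_image`, and `prompt` from above), `DPMSolverMultistepScheduler` can be dropped in the same way and typically gives good results with fewer inference steps:

```python
import torch
from diffusers import DPMSolverMultistepScheduler

# Replace the scheduler in place; the pipeline weights stay untouched.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator(device="cuda").manual_seed(1024)
image = pipe(
    prompt=prompt,
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=generator,
).images[0]
image
```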
- -아래 공백을 확인하고 `strength` 값을 다르게 설정하여 이미지를 생성해 보세요. `strength`를 낮게 설정하면 원본 이미지와 더 유사한 이미지가 생성되는 것을 확인할 수 있습니다. - -자유롭게 스케줄러를 [`LMSDiscreteScheduler`]로 전환하여 출력에 어떤 영향을 미치는지 확인해 보세요. - - \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/textual_inversion/textual_inversion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/textual_inversion/textual_inversion.py deleted file mode 100644 index 515f3964088912e551d895abfcb1081ebc0f9b4b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/textual_inversion/textual_inversion.py +++ /dev/null @@ -1,959 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import random -import shutil -import warnings -from pathlib import Path - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -if is_wandb_available(): - import wandb - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
-check_min_version("0.19.0") - -logger = get_logger(__name__) - - -def save_model_card(repo_id: str, images=None, base_model=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- textual_inversion -inference: true ---- - """ - model_card = f""" -# Textual inversion text2image fine-tuning - {repo_id} -These are textual inversion adaption weights for {base_model}. You can find some example images in the following. \n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def log_validation(text_encoder, tokenizer, unet, vae, args, accelerator, weight_dtype, epoch): - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline (note: unet and vae are loaded again in float32) - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - tokenizer=tokenizer, - unet=unet, - vae=vae, - safety_checker=None, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) - images = [] - for _ in range(args.num_validation_images): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - return images - - -def save_progress(text_encoder, placeholder_token_ids, accelerator, args, save_path): - logger.info("Saving embeddings") - learned_embeds = ( - accelerator.unwrap_model(text_encoder) - .get_input_embeddings() - .weight[min(placeholder_token_ids) : max(placeholder_token_ids) + 1] - ) - learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, save_path) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--save_as_full_pipeline", - action="store_true", - help="Save the complete stable diffusion pipeline.", - ) - parser.add_argument( - "--num_vectors", - type=int, - default=1, - help="How many textual inversion vectors shall be used to learn the concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - 
type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
- ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_steps", - type=int, - default=100, - help=( - "Run validation every X steps. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=None, - help=( - "Deprecated in favor of validation_steps. Run validation every X epochs. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=("Max number of checkpoints to store."), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. 
Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token 
- text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer( - text, - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir) - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. 
- if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # Add the placeholder token in tokenizer - placeholder_tokens = [args.placeholder_token] - - if args.num_vectors < 1: - raise ValueError(f"--num_vectors has to be larger or equal to 1, but is {args.num_vectors}") - - # add dummy tokens for multi-vector - additional_tokens = [] - for i in range(1, args.num_vectors): - additional_tokens.append(f"{args.placeholder_token}_{i}") - placeholder_tokens += additional_tokens - - num_added_tokens = tokenizer.add_tokens(placeholder_tokens) - if num_added_tokens != args.num_vectors: - raise ValueError( - f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different" - " `placeholder_token` that is not already in the tokenizer." - ) - - # Convert the initializer_token, placeholder_token to ids - token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False) - # Check if initializer_token is a single token or a sequence of tokens - if len(token_ids) > 1: - raise ValueError("The initializer token must be a single token.") - - initializer_token_id = token_ids[0] - placeholder_token_ids = tokenizer.convert_tokens_to_ids(placeholder_tokens) - - # Resize the token embeddings as we are adding new special tokens to the tokenizer - text_encoder.resize_token_embeddings(len(tokenizer)) - - # Initialise the newly added placeholder token with the embeddings of the initializer token - token_embeds = text_encoder.get_input_embeddings().weight.data - with torch.no_grad(): - for token_id in placeholder_token_ids: - token_embeds[token_id] = token_embeds[initializer_token_id].clone() - - # Freeze vae and unet - vae.requires_grad_(False) - unet.requires_grad_(False) - # Freeze all parameters except for the token embeddings in text encoder - text_encoder.text_model.encoder.requires_grad_(False) - text_encoder.text_model.final_layer_norm.requires_grad_(False) - text_encoder.text_model.embeddings.position_embedding.requires_grad_(False) - - if args.gradient_checkpointing: - # Keep unet in train mode if we are using gradient checkpointing to save memory. - # The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode. 
- unet.train() - text_encoder.gradient_checkpointing_enable() - unet.enable_gradient_checkpointing() - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - if args.validation_epochs is not None: - warnings.warn( - f"FutureWarning: You are doing logging with validation_epochs={args.validation_epochs}." - " Deprecated validation_epochs in favor of `validation_steps`" - f"Setting `args.validation_steps` to {args.validation_epochs * len(train_dataset)}", - FutureWarning, - stacklevel=2, - ) - args.validation_steps = args.validation_epochs * len(train_dataset) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - num_cycles=args.lr_num_cycles, - ) - - # Prepare everything with our `accelerator`. - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision - # as these weights are only used for inference, keeping weights in full precision is not required. 
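# Concretely: under fp16/bf16 mixed precision the frozen vae and unet are cast to
# the lower-precision dtype chosen below, while the text encoder (whose input
# embeddings are the only trainable parameters in this script) stays in fp32.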
- weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and unet to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. 
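# The loop below is the core textual-inversion update: a standard noise-prediction
# MSE loss is backpropagated into the text encoder's input embeddings, and after
# every optimizer step all embedding rows except the newly added placeholder
# token(s) are restored to their original values, so only the new token's vector
# is actually learned.
#
# After training, the learned vector(s) are written to learned_embeds.bin in the
# output directory. A minimal inference sketch (illustrative only: the model id
# and concept token are placeholders, and load_textual_inversion assumes a
# reasonably recent diffusers release):
#
#   from diffusers import StableDiffusionPipeline
#
#   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
#   pipe.load_textual_inversion("text-inversion-model/learned_embeds.bin")
#   image = pipe("a photo of a <my-concept> in the snow").images[0]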
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - # keep original embeddings as reference - orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone() - - for epoch in range(first_epoch, args.num_train_epochs): - text_encoder.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype) - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Let's make sure we don't update any embedding weights besides the newly added token - index_no_updates = torch.ones((len(tokenizer),), dtype=torch.bool) - index_no_updates[min(placeholder_token_ids) : max(placeholder_token_ids) + 1] = False - - with torch.no_grad(): - accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[ - index_no_updates - ] = orig_embeds_params[index_no_updates] - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - images = [] - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") - save_progress(text_encoder, placeholder_token_ids, accelerator, args, save_path) - - if accelerator.is_main_process: - if global_step % args.checkpointing_steps == 0: - # _before_ saving state, check if this save would set us over the `checkpoints_total_limit` - if args.checkpoints_total_limit is not None: - checkpoints = os.listdir(args.output_dir) - checkpoints = [d for d in checkpoints if d.startswith("checkpoint")] - checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1])) - - # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints - if len(checkpoints) >= args.checkpoints_total_limit: - num_to_remove = len(checkpoints) - 
args.checkpoints_total_limit + 1 - removing_checkpoints = checkpoints[0:num_to_remove] - - logger.info( - f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints" - ) - logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}") - - for removing_checkpoint in removing_checkpoints: - removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint) - shutil.rmtree(removing_checkpoint) - - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - if args.validation_prompt is not None and global_step % args.validation_steps == 0: - images = log_validation( - text_encoder, tokenizer, unet, vae, args, accelerator, weight_dtype, epoch - ) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - # Create the pipeline using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.push_to_hub and not args.save_as_full_pipeline: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = args.save_as_full_pipeline - if save_full_model: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - save_path = os.path.join(args.output_dir, "learned_embeds.bin") - save_progress(text_encoder, placeholder_token_ids, accelerator, args, save_path) - - if args.push_to_hub: - save_model_card( - repo_id, - images=images, - base_model=args.pretrained_model_name_or_path, - repo_folder=args.output_dir, - ) - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w40_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w40_20e_coco.py deleted file mode 100644 index abf6fb550e4dfff4e749e15b001c37e6db8ae476..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/hrnet/htc_hrnetv2p_w40_20e_coco.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './htc_hrnetv2p_w32_20e_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w40', - backbone=dict( - type='HRNet', - extra=dict( - stage2=dict(num_channels=(40, 80)), - stage3=dict(num_channels=(40, 80, 160)), - stage4=dict(num_channels=(40, 80, 160, 320)))), - neck=dict(type='HRFPN', in_channels=[40, 80, 160, 320], out_channels=256)) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py deleted file mode 100644 index df85a0112d27d97301fff56189f99bee0bf8efa5..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_heads/scnet_semantic_head.py +++ /dev/null @@ -1,27 +0,0 @@ -from mmdet.models.builder import HEADS -from mmdet.models.utils import ResLayer, SimplifiedBasicBlock -from 
.fused_semantic_head import FusedSemanticHead - - -@HEADS.register_module() -class SCNetSemanticHead(FusedSemanticHead): - """Mask head for `SCNet `_. - - Args: - conv_to_res (bool, optional): if True, change the conv layers to - ``SimplifiedBasicBlock``. - """ - - def __init__(self, conv_to_res=True, **kwargs): - super(SCNetSemanticHead, self).__init__(**kwargs) - self.conv_to_res = conv_to_res - if self.conv_to_res: - num_res_blocks = self.num_convs // 2 - self.convs = ResLayer( - SimplifiedBasicBlock, - self.in_channels, - self.conv_out_channels, - num_res_blocks, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - self.num_convs = num_res_blocks diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py deleted file mode 100644 index 012ad0a7d6119554ec00400ad18a09249a72eca4..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './fcn_hr18_480x480_80k_pascal_context_59.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w48', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(48, 96)), - stage3=dict(num_channels=(48, 96, 192)), - stage4=dict(num_channels=(48, 96, 192, 384)))), - decode_head=dict( - in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384]))) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv_custom/checkpoint.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv_custom/checkpoint.py deleted file mode 100644 index 19b87fef0a52d31babcdb3edb8f3089b6420173f..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv_custom/checkpoint.py +++ /dev/null @@ -1,500 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. -import io -import os -import os.path as osp -import pkgutil -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo -from torch.nn import functional as F - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.fileio import FileClient -from annotator.uniformer.mmcv.fileio import load as load_file -from annotator.uniformer.mmcv.parallel import is_module_wrapper -from annotator.uniformer.mmcv.utils import mkdir_or_exist -from annotator.uniformer.mmcv.runner import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. 
- strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. - """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def load_url_dist(url, model_dir=None): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = model_zoo.load_url(url, model_dir=model_dir) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url(url, model_dir=model_dir) - return checkpoint - - -def load_pavimodel_dist(model_path, map_location=None): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load( - downloaded_file, map_location=map_location) - return checkpoint - - -def load_fileclient_dist(filename, backend, map_location): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - allowed_backends = ['ceph'] - if backend not in allowed_backends: - 
raise ValueError(f'Load from Backend {backend} is not supported.') - if rank == 0: - fileclient = FileClient(backend=backend) - buffer = io.BytesIO(fileclient.get(filename)) - checkpoint = torch.load(buffer, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - fileclient = FileClient(backend=backend) - buffer = io.BytesIO(fileclient.get(filename)) - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - return new_checkpoint - - -def _load_checkpoint(filename, map_location=None): - """Load checkpoint from somewhere (modelzoo, file, url). - - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict | OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. 
- """ - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_urls = get_torchvision_models() - model_name = filename[11:] - checkpoint = load_url_dist(model_urls[model_name]) - elif filename.startswith('torchvision://'): - model_urls = get_torchvision_models() - model_name = filename[14:] - checkpoint = load_url_dist(model_urls[model_name]) - elif filename.startswith('open-mmlab://'): - model_urls = get_external_models() - model_name = filename[13:] - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'open-mmlab://{model_name} is deprecated in favor ' - f'of open-mmlab://{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_url_dist(model_url) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - elif filename.startswith('mmcls://'): - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_url_dist(model_urls[model_name]) - checkpoint = _process_mmcls_checkpoint(checkpoint) - elif filename.startswith(('http://', 'https://')): - checkpoint = load_url_dist(filename) - elif filename.startswith('pavi://'): - model_path = filename[7:] - checkpoint = load_pavimodel_dist(model_path, map_location=map_location) - elif filename.startswith('s3://'): - checkpoint = load_fileclient_dist( - filename, backend='ceph', map_location=map_location) - else: - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -def load_checkpoint(model, - filename, - map_location='cpu', - strict=False, - logger=None): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - checkpoint = _load_checkpoint(filename, map_location) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # for MoBY, load model of online branch - if sorted(list(state_dict.keys()))[0].startswith('encoder'): - state_dict = {k.replace('encoder.', ''): v for k, v in state_dict.items() if k.startswith('encoder.')} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = model.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H*W: - logger.warning("Error in loading absolute_pos_embed, pass") - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view(N2, H, W, C2).permute(0, 3, 1, 2) - - # interpolate position bias table if needed - relative_position_bias_table_keys = [k for k in state_dict.keys() if "relative_position_bias_table" in k] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = model.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f"Error in loading {table_key}, pass") - else: - if L1 != L2: - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).view(1, nH1, S1, S1), - size=(S2, S2), mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view(nH2, L2).permute(1, 0) - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, filename, optimizer=None, meta=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - try: - from pavi import modelcloud - from pavi.exception import NodeNotFoundError - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - mmcv.mkdir_or_exist(osp.dirname(filename)) - # immediately flush buffer - with open(filename, 'wb') as f: - torch.save(checkpoint, f) - f.flush() \ No newline at end of file diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/seg/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/seg/__init__.py deleted file mode 100644 index 93bc129b685e4a3efca2cc891729981b2865900d..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/seg/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .builder import build_pixel_sampler -from .sampler import BasePixelSampler, OHEMPixelSampler - -__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler'] diff --git a/spaces/Anonymous-sub/Rerender/gmflow_module/utils/utils.py b/spaces/Anonymous-sub/Rerender/gmflow_module/utils/utils.py deleted file mode 100644 index 76f5518b7e5b769527907b31a1c1c00ba6cfe4f1..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/gmflow_module/utils/utils.py +++ /dev/null @@ -1,58 +0,0 @@ -import torch -import torch.nn.functional as F - - -class InputPadder: - """ Pads images such that dimensions are divisible by 8 """ - - def __init__(self, dims, mode='sintel', padding_factor=8): - self.ht, self.wd = dims[-2:] - pad_ht = (((self.ht // padding_factor) + 1) * padding_factor - self.ht) % padding_factor - pad_wd = (((self.wd // padding_factor) + 1) * padding_factor - self.wd) % padding_factor - if mode == 'sintel': - self._pad = [pad_wd // 2, pad_wd - pad_wd // 2, pad_ht // 2, pad_ht - pad_ht // 2] - else: - self._pad = [pad_wd // 2, pad_wd - pad_wd // 2, 0, pad_ht] - - def pad(self, *inputs): - return [F.pad(x, self._pad, mode='replicate') for x in inputs] - - def unpad(self, x): - ht, wd = x.shape[-2:] - c = [self._pad[2], ht - self._pad[3], self._pad[0], wd - self._pad[1]] - return x[..., c[0]:c[1], c[2]:c[3]] - - -def coords_grid(batch, ht, wd, normalize=False): - if normalize: # [-1, 1] - coords = torch.meshgrid(2 * 
torch.arange(ht) / (ht - 1) - 1, - 2 * torch.arange(wd) / (wd - 1) - 1) - else: - coords = torch.meshgrid(torch.arange(ht), torch.arange(wd)) - coords = torch.stack(coords[::-1], dim=0).float() - return coords[None].repeat(batch, 1, 1, 1) # [B, 2, H, W] - - -def compute_out_of_boundary_mask(flow): - # flow: [B, 2, H, W] - assert flow.dim() == 4 and flow.size(1) == 2 - b, _, h, w = flow.shape - init_coords = coords_grid(b, h, w).to(flow.device) - corres = init_coords + flow # [B, 2, H, W] - - max_w = w - 1 - max_h = h - 1 - - valid_mask = (corres[:, 0] >= 0) & (corres[:, 0] <= max_w) & (corres[:, 1] >= 0) & (corres[:, 1] <= max_h) - - # in case very large flow - flow_mask = (flow[:, 0].abs() <= max_w) & (flow[:, 1].abs() <= max_h) - - valid_mask = valid_mask & flow_mask - - return valid_mask # [B, H, W] - - -def count_parameters(model): - num = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num diff --git a/spaces/Anuj-Panthri/imdb_review_sentiment/app.py b/spaces/Anuj-Panthri/imdb_review_sentiment/app.py deleted file mode 100644 index f62af7f2e28e44459d96595e669facfe79977c0e..0000000000000000000000000000000000000000 --- a/spaces/Anuj-Panthri/imdb_review_sentiment/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from fastai.text.all import * - -# to fix : NotImplementedError: cannot instantiate 'PosixPath' on your system -# import pathlib -# temp = pathlib.PosixPath -# pathlib.PosixPath = pathlib.WindowsPath - -examples=['This was a fantastic end to the trilogy.','I\'ve never seen a bigger waste of my time.','Just when we thought they couldn\'t possibly make a worse TV movie than Sharknado? Syfy says, "Hold my beer!"'] - -learn=load_learner('imdb_review_sentiment_model.pkl') - -class_names=['neg','pos'] - -def classify(review): - _,_,pob=learn.predict(review) - return dict(zip(class_names,map(float,pob))) - -iface = gr.Interface(fn=classify, inputs=gr.inputs.Textbox(), outputs=gr.outputs.Label(),examples=examples) -iface.launch() \ No newline at end of file diff --git a/spaces/Arsenii2023/Demo1/README.md b/spaces/Arsenii2023/Demo1/README.md deleted file mode 100644 index 9d26e58744b1a197da22c0b75888a29339707623..0000000000000000000000000000000000000000 --- a/spaces/Arsenii2023/Demo1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Demo1 -emoji: 🏆 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Artples/LLaMA-2-CHAT/README.md b/spaces/Artples/LLaMA-2-CHAT/README.md deleted file mode 100644 index aa3435b74da11de768e9c38188fd84133871604f..0000000000000000000000000000000000000000 --- a/spaces/Artples/LLaMA-2-CHAT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LLaMA-2-CHAT -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/__main__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/__main__.py deleted file mode 100644 index fe34a7b7772cef55f5b5cb3455a2850489620ca7..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/__main__.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import 
sys -import warnings - -# Remove '' and current working directory from the first entry -# of sys.path, if present to avoid using current directory -# in pip commands check, freeze, install, list and show, -# when invoked as python -m pip -if sys.path[0] in ("", os.getcwd()): - sys.path.pop(0) - -# If we are running from a wheel, add the wheel to sys.path -# This allows the usage python pip-*.whl/pip install pip-*.whl -if __package__ == "": - # __file__ is pip-*.whl/pip/__main__.py - # first dirname call strips of '/__main__.py', second strips off '/pip' - # Resulting path is the name of the wheel itself - # Add that to sys.path so we can import pip - path = os.path.dirname(os.path.dirname(__file__)) - sys.path.insert(0, path) - -if __name__ == "__main__": - # Work around the error reported in #9540, pending a proper fix. - # Note: It is essential the warning filter is set *before* importing - # pip, as the deprecation happens at import time, not runtime. - warnings.filterwarnings( - "ignore", category=DeprecationWarning, module=".*packaging\\.version" - ) - from pip._internal.cli.main import main as _main - - sys.exit(_main()) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py deleted file mode 100644 index e00de4ad28fd81483c9e1161394b7b508fdad91f..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_modeling.py +++ /dev/null @@ -1,419 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import functools -import io -import struct -import types -import torch - -from detectron2.modeling import meta_arch -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads import keypoint_head -from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes - -from .c10 import Caffe2Compatible -from .caffe2_patch import ROIHeadsPatcher, patch_generalized_rcnn -from .shared import ( - alias, - check_set_pb_arg, - get_pb_arg_floats, - get_pb_arg_valf, - get_pb_arg_vali, - get_pb_arg_vals, - mock_torch_nn_functional_interpolate, -) - - -def assemble_rcnn_outputs_by_name(image_sizes, tensor_outputs, force_mask_on=False): - """ - A function to assemble caffe2 model's outputs (i.e. Dict[str, Tensor]) - to detectron2's format (i.e. list of Instances instance). - This only works when the model follows the Caffe2 detectron's naming convention. - - Args: - image_sizes (List[List[int, int]]): [H, W] of every image. - tensor_outputs (Dict[str, Tensor]): external_output to its tensor. 
- - force_mask_on (Bool): if true, the it make sure there'll be pred_masks even - if the mask is not found from tensor_outputs (usually due to model crash) - """ - - results = [Instances(image_size) for image_size in image_sizes] - - batch_splits = tensor_outputs.get("batch_splits", None) - if batch_splits: - raise NotImplementedError() - assert len(image_sizes) == 1 - result = results[0] - - bbox_nms = tensor_outputs["bbox_nms"] - score_nms = tensor_outputs["score_nms"] - class_nms = tensor_outputs["class_nms"] - # Detection will always success because Conv support 0-batch - assert bbox_nms is not None - assert score_nms is not None - assert class_nms is not None - if bbox_nms.shape[1] == 5: - result.pred_boxes = RotatedBoxes(bbox_nms) - else: - result.pred_boxes = Boxes(bbox_nms) - result.scores = score_nms - result.pred_classes = class_nms.to(torch.int64) - - mask_fcn_probs = tensor_outputs.get("mask_fcn_probs", None) - if mask_fcn_probs is not None: - # finish the mask pred - mask_probs_pred = mask_fcn_probs - num_masks = mask_probs_pred.shape[0] - class_pred = result.pred_classes - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = mask_probs_pred[indices, class_pred][:, None] - result.pred_masks = mask_probs_pred - elif force_mask_on: - # NOTE: there's no way to know the height/width of mask here, it won't be - # used anyway when batch size is 0, so just set them to 0. - result.pred_masks = torch.zeros([0, 1, 0, 0], dtype=torch.uint8) - - keypoints_out = tensor_outputs.get("keypoints_out", None) - kps_score = tensor_outputs.get("kps_score", None) - if keypoints_out is not None: - # keypoints_out: [N, 4, #kypoints], where 4 is in order of (x, y, score, prob) - keypoints_tensor = keypoints_out - # NOTE: it's possible that prob is not calculated if "should_output_softmax" - # is set to False in HeatmapMaxKeypoint, so just using raw score, seems - # it doesn't affect mAP. TODO: check more carefully. - keypoint_xyp = keypoints_tensor.transpose(1, 2)[:, :, [0, 1, 2]] - result.pred_keypoints = keypoint_xyp - elif kps_score is not None: - # keypoint heatmap to sparse data structure - pred_keypoint_logits = kps_score - keypoint_head.keypoint_rcnn_inference(pred_keypoint_logits, [result]) - - return results - - -def _cast_to_f32(f64): - return struct.unpack("f", struct.pack("f", f64))[0] - - -def set_caffe2_compatible_tensor_mode(model, enable=True): - def _fn(m): - if isinstance(m, Caffe2Compatible): - m.tensor_mode = enable - - model.apply(_fn) - - -def convert_batched_inputs_to_c2_format(batched_inputs, size_divisibility, device): - """ - See get_caffe2_inputs() below. - """ - assert all(isinstance(x, dict) for x in batched_inputs) - assert all(x["image"].dim() == 3 for x in batched_inputs) - - images = [x["image"] for x in batched_inputs] - images = ImageList.from_tensors(images, size_divisibility) - - im_info = [] - for input_per_image, image_size in zip(batched_inputs, images.image_sizes): - target_height = input_per_image.get("height", image_size[0]) - target_width = input_per_image.get("width", image_size[1]) # noqa - # NOTE: The scale inside im_info is kept as convention and for providing - # post-processing information if further processing is needed. For - # current Caffe2 model definitions that don't include post-processing inside - # the model, this number is not used. - # NOTE: There can be a slight difference between width and height - # scales, using a single number can results in numerical difference - # compared with D2's post-processing. 
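# Each im_info row assembled below is (height, width, scale); per the note above,
# the current Caffe2 model definitions only rely on the height and width entries,
# and the scale is carried along purely as a convention.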
- scale = target_height / image_size[0] - im_info.append([image_size[0], image_size[1], scale]) - im_info = torch.Tensor(im_info) - - return images.tensor.to(device), im_info.to(device) - - -class Caffe2MetaArch(Caffe2Compatible, torch.nn.Module): - """ - Base class for caffe2-compatible implementation of a meta architecture. - The forward is traceable and its traced graph can be converted to caffe2 - graph through ONNX. - """ - - def __init__(self, cfg, torch_model): - """ - Args: - cfg (CfgNode): - torch_model (nn.Module): the detectron2 model (meta_arch) to be - converted. - """ - super().__init__() - self._wrapped_model = torch_model - self.eval() - set_caffe2_compatible_tensor_mode(self, True) - - def get_caffe2_inputs(self, batched_inputs): - """ - Convert pytorch-style structured inputs to caffe2-style inputs that - are tuples of tensors. - - Args: - batched_inputs (list[dict]): inputs to a detectron2 model - in its standard format. Each dict has "image" (CHW tensor), and optionally - "height" and "width". - - Returns: - tuple[Tensor]: - tuple of tensors that will be the inputs to the - :meth:`forward` method. For existing models, the first - is an NCHW tensor (padded and batched); the second is - a im_info Nx3 tensor, where the rows are - (height, width, unused legacy parameter) - """ - return convert_batched_inputs_to_c2_format( - batched_inputs, - self._wrapped_model.backbone.size_divisibility, - self._wrapped_model.device, - ) - - def encode_additional_info(self, predict_net, init_net): - """ - Save extra metadata that will be used by inference in the output protobuf. - """ - pass - - def forward(self, inputs): - """ - Run the forward in caffe2-style. It has to use caffe2-compatible ops - and the method will be used for tracing. - - Args: - inputs (tuple[Tensor]): inputs defined by :meth:`get_caffe2_input`. - They will be the inputs of the converted caffe2 graph. - - Returns: - tuple[Tensor]: output tensors. They will be the outputs of the - converted caffe2 graph. - """ - raise NotImplementedError - - def _caffe2_preprocess_image(self, inputs): - """ - Caffe2 implementation of preprocess_image, which is called inside each MetaArch's forward. - It normalizes the input images, and the final caffe2 graph assumes the - inputs have been batched already. - """ - data, im_info = inputs - data = alias(data, "data") - im_info = alias(im_info, "im_info") - mean, std = self._wrapped_model.pixel_mean, self._wrapped_model.pixel_std - normalized_data = (data - mean) / std - normalized_data = alias(normalized_data, "normalized_data") - - # Pack (data, im_info) into ImageList which is recognized by self.inference. - images = ImageList(tensor=normalized_data, image_sizes=im_info) - return images - - @staticmethod - def get_outputs_converter(predict_net, init_net): - """ - Creates a function that converts outputs of the caffe2 model to - detectron2's standard format. - The function uses information in `predict_net` and `init_net` that are - available at inferene time. Therefore the function logic can be used in inference. - - The returned function has the following signature: - - def convert(batched_inputs, c2_inputs, c2_results) -> detectron2_outputs - - Where - - * batched_inputs (list[dict]): the original input format of the meta arch - * c2_inputs (tuple[Tensor]): the caffe2 inputs. - * c2_results (dict[str, Tensor]): the caffe2 output format, - corresponding to the outputs of the :meth:`forward` function. - * detectron2_outputs: the original output format of the meta arch. 
- - This function can be used to compare the outputs of the original meta arch and - the converted caffe2 graph. - - Returns: - callable: a callable of the above signature. - """ - raise NotImplementedError - - -class Caffe2GeneralizedRCNN(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.GeneralizedRCNN) - torch_model = patch_generalized_rcnn(torch_model) - super().__init__(cfg, torch_model) - - try: - use_heatmap_max_keypoint = cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT - except AttributeError: - use_heatmap_max_keypoint = False - self.roi_heads_patcher = ROIHeadsPatcher( - self._wrapped_model.roi_heads, use_heatmap_max_keypoint - ) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"GeneralizedRCNN") - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - if not self.tensor_mode: - return self._wrapped_model.inference(inputs) - images = self._caffe2_preprocess_image(inputs) - features = self._wrapped_model.backbone(images.tensor) - proposals, _ = self._wrapped_model.proposal_generator(images, features) - with self.roi_heads_patcher.mock_roi_heads(): - detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals) - return tuple(detector_results[0].flatten()) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - results = assemble_rcnn_outputs_by_name(image_sizes, c2_results) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -class Caffe2RetinaNet(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.RetinaNet) - super().__init__(cfg, torch_model) - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - assert self.tensor_mode - images = self._caffe2_preprocess_image(inputs) - - # explicitly return the images sizes to avoid removing "im_info" by ONNX - # since it's not used in the forward path - return_tensors = [images.image_sizes] - - features = self._wrapped_model.backbone(images.tensor) - features = [features[f] for f in self._wrapped_model.head_in_features] - for i, feature_i in enumerate(features): - features[i] = alias(feature_i, "feature_{}".format(i), is_backward=True) - return_tensors.append(features[i]) - - pred_logits, pred_anchor_deltas = self._wrapped_model.head(features) - for i, (box_cls_i, box_delta_i) in enumerate(zip(pred_logits, pred_anchor_deltas)): - return_tensors.append(alias(box_cls_i, "box_cls_{}".format(i))) - return_tensors.append(alias(box_delta_i, "box_delta_{}".format(i))) - - return tuple(return_tensors) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"RetinaNet") - - # Inference parameters: - check_set_pb_arg( - predict_net, "score_threshold", "f", 
_cast_to_f32(self._wrapped_model.test_score_thresh) - ) - check_set_pb_arg( - predict_net, "topk_candidates", "i", self._wrapped_model.test_topk_candidates - ) - check_set_pb_arg( - predict_net, "nms_threshold", "f", _cast_to_f32(self._wrapped_model.test_nms_thresh) - ) - check_set_pb_arg( - predict_net, - "max_detections_per_image", - "i", - self._wrapped_model.max_detections_per_image, - ) - - check_set_pb_arg( - predict_net, - "bbox_reg_weights", - "floats", - [_cast_to_f32(w) for w in self._wrapped_model.box2box_transform.weights], - ) - self._encode_anchor_generator_cfg(predict_net) - - def _encode_anchor_generator_cfg(self, predict_net): - # serialize anchor_generator for future use - serialized_anchor_generator = io.BytesIO() - torch.save(self._wrapped_model.anchor_generator, serialized_anchor_generator) - # Ideally we can put anchor generating inside the model, then we don't - # need to store this information. - bytes = serialized_anchor_generator.getvalue() - check_set_pb_arg(predict_net, "serialized_anchor_generator", "s", bytes) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - self = types.SimpleNamespace() - serialized_anchor_generator = io.BytesIO( - get_pb_arg_vals(predict_net, "serialized_anchor_generator", None) - ) - self.anchor_generator = torch.load(serialized_anchor_generator) - bbox_reg_weights = get_pb_arg_floats(predict_net, "bbox_reg_weights", None) - self.box2box_transform = Box2BoxTransform(weights=tuple(bbox_reg_weights)) - self.test_score_thresh = get_pb_arg_valf(predict_net, "score_threshold", None) - self.test_topk_candidates = get_pb_arg_vali(predict_net, "topk_candidates", None) - self.test_nms_thresh = get_pb_arg_valf(predict_net, "nms_threshold", None) - self.max_detections_per_image = get_pb_arg_vali( - predict_net, "max_detections_per_image", None - ) - - # hack to reuse inference code from RetinaNet - for meth in [ - "forward_inference", - "inference_single_image", - "_transpose_dense_predictions", - "_decode_multi_level_predictions", - "_decode_per_level_predictions", - ]: - setattr(self, meth, functools.partial(getattr(meta_arch.RetinaNet, meth), self)) - - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - dummy_images = ImageList( - torch.randn( - ( - len(im_info), - 3, - ) - + tuple(image_sizes[0]) - ), - image_sizes, - ) - - num_features = len([x for x in c2_results.keys() if x.startswith("box_cls_")]) - pred_logits = [c2_results["box_cls_{}".format(i)] for i in range(num_features)] - pred_anchor_deltas = [c2_results["box_delta_{}".format(i)] for i in range(num_features)] - - # For each feature level, feature should have the same batch size and - # spatial dimension as the box_cls and box_delta. 
- dummy_features = [x.clone()[:, 0:0, :, :] for x in pred_logits] - # self.num_classess can be inferred - self.num_classes = pred_logits[0].shape[1] // (pred_anchor_deltas[0].shape[1] // 4) - - results = self.forward_inference( - dummy_images, dummy_features, [pred_logits, pred_anchor_deltas] - ) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -META_ARCH_CAFFE2_EXPORT_TYPE_MAP = { - "GeneralizedRCNN": Caffe2GeneralizedRCNN, - "RetinaNet": Caffe2RetinaNet, -} diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/Bart92/RVC_HF/infer/modules/train/extract/extract_f0_rmvpe.py b/spaces/Bart92/RVC_HF/infer/modules/train/extract/extract_f0_rmvpe.py deleted file mode 100644 index c6c90440d9e612b37c6d5a514786a6d0fffb19ba..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/modules/train/extract/extract_f0_rmvpe.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import sys -import traceback - -import parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging - -import numpy as np -import pyworld - -from infer.lib.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) - -n_part = int(sys.argv[1]) -i_part = int(sys.argv[2]) -i_gpu = sys.argv[3] -os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu) -exp_dir = sys.argv[4] -is_half = sys.argv[5] -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def compute_f0(self, path, f0_method): - x = load_audio(path, self.fs) - # p_len = x.shape[0] // self.hop - if f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - print("Loading rmvpe model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=is_half, device="cuda" - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method): - if len(paths) == 0: - printt("no-f0-todo") - else: - printt("todo-f0-%s" % len(paths)) - n = max(len(paths) // 5, 1) # 每个进程最多打印5条 - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if idx % n == 0: - printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path)) - if ( - os.path.exists(opt_path1 + ".npy") == True - and os.path.exists(opt_path2 + 
".npy") == True - ): - continue - featur_pit = self.compute_f0(inp_path, f0_method) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - except: - printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc())) - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - paths.append([inp_path, opt_path1, opt_path2]) - try: - featureInput.go(paths[i_part::n_part], "rmvpe") - except: - printt("f0_all_fail-%s" % (traceback.format_exc())) - # ps = [] - # for i in range(n_p): - # p = Process( - # target=featureInput.go, - # args=( - # paths[i::n_p], - # f0method, - # ), - # ) - # ps.append(p) - # p.start() - # for i in range(n_p): - # ps[i].join() diff --git a/spaces/Benson/text-generation/Examples/Descarga De La Red M.hollywoodbets.net.md b/spaces/Benson/text-generation/Examples/Descarga De La Red M.hollywoodbets.net.md deleted file mode 100644 index e9dd1693d2367b79141fa776613f355ed7efb9d5..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga De La Red M.hollywoodbets.net.md +++ /dev/null @@ -1,104 +0,0 @@ -
-

How to download and use m.hollywoodbets.net

-

If you are looking for a convenient and fast way to bet on sports, horse racing, casino games, and more, you may want to download and use m.hollywoodbets.net. This is the mobile version of Hollywoodbets, one of the most popular online betting platforms in South Africa. In this article, we will show you what m.hollywoodbets.net is, why you should download it, how to download it, how to use it, and how to solve some common problems with it.

-

m.hollywoodbets.net network download


Downloadhttps://bltlly.com/2v6JvX



-

What is m.hollywoodbets.net?

-

m.hollywoodbets.net is the mobile site of Hollywoodbets, a licensed betting operator that offers a wide range of betting options across various sports and events. You can bet on football, rugby, cricket, tennis, golf, basketball, and more. You can also bet on horse racing from South Africa and other countries. In addition, you can play casino games, slots, lucky numbers, betgames, live games, and more. You can access all of these features from your mobile device using m.hollywoodbets.net.

-

Why download m.hollywoodbets.net?

-

There are many benefits to downloading and using m.hollywoodbets.net. Here are some of them:

-
    -
  • Convenience: You can bet anytime and anywhere using your mobile device. You do not need a computer or a browser to access the site. You can simply tap the app icon and start betting.
  • -
  • Speed: The mobile site is optimized for fast loading and smooth performance. You can place your bets quickly and easily without any delays or glitches.
  • -
  • Data-free access: You can access the mobile site without using any data. Hollywoodbets has partnered with several network providers to offer data-free access to its customers. You can check whether your network provider is supported by visiting [1](https://sport.hollywoodbets.net/).
  • - -
-

How to download m.hollywoodbets.net?

-

If you have an Android device, you can download and install the app for m.hollywoodbets.net by following these steps:

-
    -
  1. Visit [1](https://sport.hollywoodbets.net/) from your mobile browser and log in to your account. If you do not have an account yet, you can register one by clicking "Join Now".
  2. -
  3. Scroll down to the bottom of the page and click "Basic Feature Phone App". This will redirect you to a site where you can download the app.
  4. -
  5. Click "Download Android App" and wait for the download to complete.
  6. -
  7. Go to your security settings and allow installation from unknown sources.
  8. -
  9. Open the downloaded file and install the app on your device.
  10. -
-

Note that there is no official app for iOS devices, so you will need to use the mobile browser version if you have an iPhone or iPad.

-

How to use m.hollywoodbets.net?

-

Using m.hollywoodbets.net is easy and simple. Here are some basic steps to get started:

-

-
    -
  1. Log in to your account using your username and password. If you forgot your password, you can reset it by clicking "Forgot Password".
  2. -
  3. Choose the betting category you want, such as sports, horse racing, casino, etc. You can use the menu icon in the top left corner to navigate between the different categories.
  4. -
  5. Select the event or game you want to bet on. You can use the search bar or the filters to find what you are looking for.
  6. -
  7. Choose the market and the odds you want to bet on. You can tap the odds to add them to your bet slip.
  8. -
  9. Enter the amount you want to stake and confirm your bet. You can also use the "Quick Bet" feature to place your bet faster.
  10. -
  11. Review your bet history and balance by clicking "My Account". You can also view your pending bets, settled bets, and open bets.
  12. -
- -

To deposit and withdraw money using m.hollywoodbets.net, you need a verified account and a valid bank account or card. Here are some methods you can use:

- -
- - - - - - - - - - - - - - -

To make a deposit, you can follow these steps:

-
    -
  1. Log in to your account and click "Deposit".
  2. -
  3. Select the method you want to use and enter the amount you want to deposit.
  4. -
  5. Follow the on-screen instructions to complete the transaction.
  6. -
  7. Wait for the confirmation message and check your balance.
  8. -
-

To make a withdrawal, you can follow these steps:

-
    -
  1. Log in to your account and click "Withdraw".
  2. -
  3. Select the method you want to use and enter the amount you want to withdraw.
  4. -
  5. Enter your bank account or card details if required.
  6. -
  7. Confirm your request and wait for approval.
  8. -
  9. Check your bank account or card statement for the funds.
  10. -
-

How to contact customer support using m.hollywoodbets.net?

- - -

Common problems with m.hollywoodbets.net and how to solve them

-

While m.hollywoodbets.net is designed to provide a smooth, hassle-free betting experience, you may run into the occasional issue. Here are some of the common problems and how to solve them:

- -

Conclusion

-

m.hollywoodbets.net is a great way to enjoy online betting on your mobile device. You can download and use it easily and access a variety of betting options, promotions, and features. You can also deposit and withdraw money securely and contact customer support conveniently. If you run into any problems with the mobile site, you can follow the tips above or contact customer support for help. So what are you waiting for? Download m.hollywoodbets.net today and start betting!

-

Frequently asked questions

-

Is m.hollywoodbets.net safe and legal?

-

Yes, m.hollywoodbets.net is safe and legal. Hollywoodbets is licensed by the Western Cape Gambling and Racing Board and adheres to strict security standards. All transactions are encrypted and protected with SSL technology. All personal information is kept confidential and is not shared with third parties.

-

What are the minimum and maximum bets on m.hollywoodbets.net? The minimum bet on m.hollywoodbets.net is R1, while the maximum bet depends on the event and the market you are betting on. You can check the maximum bet by clicking "Max Bet" on your bet slip.

-

How can I get free bets on m.hollywoodbets.net?

-

There are several ways to get free bets on m.hollywoodbets.net. Some of them are:

- -

How can I check the results of my bets on m.hollywoodbets.net?

-

You can check the results of your bets on m.hollywoodbets.net by clicking "My Account" and "Bet History". You can also use the "Results" feature on the mobile site to check the results of various events and games.

-

How can I update my personal details on m.hollywoodbets.net?

-

You can update your personal details on m.hollywoodbets.net by clicking "My Account" and "Personal Details". You can change your password, email address, phone number, and security question. However, you cannot change your first name, surname, date of birth, or ID number. If you need to change these details, you must contact customer support and provide proof of identity.

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/zoneinfo/rebuild.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/zoneinfo/rebuild.py deleted file mode 100644 index 684c6586f091350c347f2b6150935f5214ffec27..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/zoneinfo/rebuild.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -import os -import tempfile -import shutil -import json -from subprocess import check_call, check_output -from tarfile import TarFile - -from dateutil.zoneinfo import METADATA_FN, ZONEFILENAME - - -def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None): - """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar* - - filename is the timezone tarball from ``ftp.iana.org/tz``. - - """ - tmpdir = tempfile.mkdtemp() - zonedir = os.path.join(tmpdir, "zoneinfo") - moduledir = os.path.dirname(__file__) - try: - with TarFile.open(filename) as tf: - for name in zonegroups: - tf.extract(name, tmpdir) - filepaths = [os.path.join(tmpdir, n) for n in zonegroups] - - _run_zic(zonedir, filepaths) - - # write metadata file - with open(os.path.join(zonedir, METADATA_FN), 'w') as f: - json.dump(metadata, f, indent=4, sort_keys=True) - target = os.path.join(moduledir, ZONEFILENAME) - with TarFile.open(target, "w:%s" % format) as tf: - for entry in os.listdir(zonedir): - entrypath = os.path.join(zonedir, entry) - tf.add(entrypath, entry) - finally: - shutil.rmtree(tmpdir) - - -def _run_zic(zonedir, filepaths): - """Calls the ``zic`` compiler in a compatible way to get a "fat" binary. - - Recent versions of ``zic`` default to ``-b slim``, while older versions - don't even have the ``-b`` option (but default to "fat" binaries). The - current version of dateutil does not support Version 2+ TZif files, which - causes problems when used in conjunction with "slim" binaries, so this - function is used to ensure that we always get a "fat" binary. - """ - - try: - help_text = check_output(["zic", "--help"]) - except OSError as e: - _print_on_nosuchfile(e) - raise - - if b"-b " in help_text: - bloat_args = ["-b", "fat"] - else: - bloat_args = [] - - check_call(["zic"] + bloat_args + ["-d", zonedir] + filepaths) - - -def _print_on_nosuchfile(e): - """Print helpful troubleshooting message - - e is an exception raised by subprocess.check_call() - - """ - if e.errno == 2: - logging.error( - "Could not find zic. Perhaps you need to install " - "libc-bin or some other package that provides it, " - "or it's not in your PATH?") diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/metadata/languages.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/metadata/languages.py deleted file mode 100644 index eb40c5f0c8526208d434d762855d23079dc68b36..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/metadata/languages.py +++ /dev/null @@ -1,352 +0,0 @@ -""" -Metadata about languages used by our model training code for our -SingleByteCharSetProbers. Could be used for other things in the future. - -This code is based on the language metadata from the uchardet project. -""" - -from string import ascii_letters -from typing import List, Optional - -# TODO: Add Ukrainian (KOI8-U) - - -class Language: - """Metadata about a language useful for training models - - :ivar name: The human name for the language, in English. 
- :type name: str - :ivar iso_code: 2-letter ISO 639-1 if possible, 3-letter ISO code otherwise, - or use another catalog as a last resort. - :type iso_code: str - :ivar use_ascii: Whether or not ASCII letters should be included in trained - models. - :type use_ascii: bool - :ivar charsets: The charsets we want to support and create data for. - :type charsets: list of str - :ivar alphabet: The characters in the language's alphabet. If `use_ascii` is - `True`, you only need to add those not in the ASCII set. - :type alphabet: str - :ivar wiki_start_pages: The Wikipedia pages to start from if we're crawling - Wikipedia for training data. - :type wiki_start_pages: list of str - """ - - def __init__( - self, - name: Optional[str] = None, - iso_code: Optional[str] = None, - use_ascii: bool = True, - charsets: Optional[List[str]] = None, - alphabet: Optional[str] = None, - wiki_start_pages: Optional[List[str]] = None, - ) -> None: - super().__init__() - self.name = name - self.iso_code = iso_code - self.use_ascii = use_ascii - self.charsets = charsets - if self.use_ascii: - if alphabet: - alphabet += ascii_letters - else: - alphabet = ascii_letters - elif not alphabet: - raise ValueError("Must supply alphabet if use_ascii is False") - self.alphabet = "".join(sorted(set(alphabet))) if alphabet else None - self.wiki_start_pages = wiki_start_pages - - def __repr__(self) -> str: - param_str = ", ".join( - f"{k}={v!r}" for k, v in self.__dict__.items() if not k.startswith("_") - ) - return f"{self.__class__.__name__}({param_str})" - - -LANGUAGES = { - "Arabic": Language( - name="Arabic", - iso_code="ar", - use_ascii=False, - # We only support encodings that use isolated - # forms, because the current recommendation is - # that the rendering system handles presentation - # forms. This means we purposefully skip IBM864. 
- charsets=["ISO-8859-6", "WINDOWS-1256", "CP720", "CP864"], - alphabet="ءآأؤإئابةتثجحخدذرزسشصضطظعغػؼؽؾؿـفقكلمنهوىيًٌٍَُِّ", - wiki_start_pages=["الصفحة_الرئيسية"], - ), - "Belarusian": Language( - name="Belarusian", - iso_code="be", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "IBM866", "MacCyrillic"], - alphabet="АБВГДЕЁЖЗІЙКЛМНОПРСТУЎФХЦЧШЫЬЭЮЯабвгдеёжзійклмнопрстуўфхцчшыьэюяʼ", - wiki_start_pages=["Галоўная_старонка"], - ), - "Bulgarian": Language( - name="Bulgarian", - iso_code="bg", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "IBM855"], - alphabet="АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЬЮЯабвгдежзийклмнопрстуфхцчшщъьюя", - wiki_start_pages=["Начална_страница"], - ), - "Czech": Language( - name="Czech", - iso_code="cz", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="áčďéěíňóřšťúůýžÁČĎÉĚÍŇÓŘŠŤÚŮÝŽ", - wiki_start_pages=["Hlavní_strana"], - ), - "Danish": Language( - name="Danish", - iso_code="da", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="æøåÆØÅ", - wiki_start_pages=["Forside"], - ), - "German": Language( - name="German", - iso_code="de", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="äöüßẞÄÖÜ", - wiki_start_pages=["Wikipedia:Hauptseite"], - ), - "Greek": Language( - name="Greek", - iso_code="el", - use_ascii=False, - charsets=["ISO-8859-7", "WINDOWS-1253"], - alphabet="αβγδεζηθικλμνξοπρσςτυφχψωάέήίόύώΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΣΤΥΦΧΨΩΆΈΉΊΌΎΏ", - wiki_start_pages=["Πύλη:Κύρια"], - ), - "English": Language( - name="English", - iso_code="en", - use_ascii=True, - charsets=["ISO-8859-1", "WINDOWS-1252", "MacRoman"], - wiki_start_pages=["Main_Page"], - ), - "Esperanto": Language( - name="Esperanto", - iso_code="eo", - # Q, W, X, and Y not used at all - use_ascii=False, - charsets=["ISO-8859-3"], - alphabet="abcĉdefgĝhĥijĵklmnoprsŝtuŭvzABCĈDEFGĜHĤIJĴKLMNOPRSŜTUŬVZ", - wiki_start_pages=["Vikipedio:Ĉefpaĝo"], - ), - "Spanish": Language( - name="Spanish", - iso_code="es", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ñáéíóúüÑÁÉÍÓÚÜ", - wiki_start_pages=["Wikipedia:Portada"], - ), - "Estonian": Language( - name="Estonian", - iso_code="et", - use_ascii=False, - charsets=["ISO-8859-4", "ISO-8859-13", "WINDOWS-1257"], - # C, F, Š, Q, W, X, Y, Z, Ž are only for - # loanwords - alphabet="ABDEGHIJKLMNOPRSTUVÕÄÖÜabdeghijklmnoprstuvõäöü", - wiki_start_pages=["Esileht"], - ), - "Finnish": Language( - name="Finnish", - iso_code="fi", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÅÄÖŠŽåäöšž", - wiki_start_pages=["Wikipedia:Etusivu"], - ), - "French": Language( - name="French", - iso_code="fr", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="œàâçèéîïùûêŒÀÂÇÈÉÎÏÙÛÊ", - wiki_start_pages=["Wikipédia:Accueil_principal", "Bœuf (animal)"], - ), - "Hebrew": Language( - name="Hebrew", - iso_code="he", - use_ascii=False, - charsets=["ISO-8859-8", "WINDOWS-1255"], - alphabet="אבגדהוזחטיךכלםמןנסעףפץצקרשתװױײ", - wiki_start_pages=["עמוד_ראשי"], - ), - "Croatian": Language( - name="Croatian", - iso_code="hr", - # Q, W, X, Y are only used for foreign words. 
- use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcčćdđefghijklmnoprsštuvzžABCČĆDĐEFGHIJKLMNOPRSŠTUVZŽ", - wiki_start_pages=["Glavna_stranica"], - ), - "Hungarian": Language( - name="Hungarian", - iso_code="hu", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcdefghijklmnoprstuvzáéíóöőúüűABCDEFGHIJKLMNOPRSTUVZÁÉÍÓÖŐÚÜŰ", - wiki_start_pages=["Kezdőlap"], - ), - "Italian": Language( - name="Italian", - iso_code="it", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÀÈÉÌÒÓÙàèéìòóù", - wiki_start_pages=["Pagina_principale"], - ), - "Lithuanian": Language( - name="Lithuanian", - iso_code="lt", - use_ascii=False, - charsets=["ISO-8859-13", "WINDOWS-1257", "ISO-8859-4"], - # Q, W, and X not used at all - alphabet="AĄBCČDEĘĖFGHIĮYJKLMNOPRSŠTUŲŪVZŽaąbcčdeęėfghiįyjklmnoprsštuųūvzž", - wiki_start_pages=["Pagrindinis_puslapis"], - ), - "Latvian": Language( - name="Latvian", - iso_code="lv", - use_ascii=False, - charsets=["ISO-8859-13", "WINDOWS-1257", "ISO-8859-4"], - # Q, W, X, Y are only for loanwords - alphabet="AĀBCČDEĒFGĢHIĪJKĶLĻMNŅOPRSŠTUŪVZŽaābcčdeēfgģhiījkķlļmnņoprsštuūvzž", - wiki_start_pages=["Sākumlapa"], - ), - "Macedonian": Language( - name="Macedonian", - iso_code="mk", - use_ascii=False, - charsets=["ISO-8859-5", "WINDOWS-1251", "MacCyrillic", "IBM855"], - alphabet="АБВГДЃЕЖЗЅИЈКЛЉМНЊОПРСТЌУФХЦЧЏШабвгдѓежзѕијклљмнњопрстќуфхцчџш", - wiki_start_pages=["Главна_страница"], - ), - "Dutch": Language( - name="Dutch", - iso_code="nl", - use_ascii=True, - charsets=["ISO-8859-1", "WINDOWS-1252", "MacRoman"], - wiki_start_pages=["Hoofdpagina"], - ), - "Polish": Language( - name="Polish", - iso_code="pl", - # Q and X are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="AĄBCĆDEĘFGHIJKLŁMNŃOÓPRSŚTUWYZŹŻaąbcćdeęfghijklłmnńoóprsśtuwyzźż", - wiki_start_pages=["Wikipedia:Strona_główna"], - ), - "Portuguese": Language( - name="Portuguese", - iso_code="pt", - use_ascii=True, - charsets=["ISO-8859-1", "ISO-8859-15", "WINDOWS-1252", "MacRoman"], - alphabet="ÁÂÃÀÇÉÊÍÓÔÕÚáâãàçéêíóôõú", - wiki_start_pages=["Wikipédia:Página_principal"], - ), - "Romanian": Language( - name="Romanian", - iso_code="ro", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="ăâîșțĂÂÎȘȚ", - wiki_start_pages=["Pagina_principală"], - ), - "Russian": Language( - name="Russian", - iso_code="ru", - use_ascii=False, - charsets=[ - "ISO-8859-5", - "WINDOWS-1251", - "KOI8-R", - "MacCyrillic", - "IBM866", - "IBM855", - ], - alphabet="абвгдеёжзийклмнопрстуфхцчшщъыьэюяАБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ", - wiki_start_pages=["Заглавная_страница"], - ), - "Slovak": Language( - name="Slovak", - iso_code="sk", - use_ascii=True, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="áäčďéíĺľňóôŕšťúýžÁÄČĎÉÍĹĽŇÓÔŔŠŤÚÝŽ", - wiki_start_pages=["Hlavná_stránka"], - ), - "Slovene": Language( - name="Slovene", - iso_code="sl", - # Q, W, X, Y are only used for foreign words. - use_ascii=False, - charsets=["ISO-8859-2", "WINDOWS-1250"], - alphabet="abcčdefghijklmnoprsštuvzžABCČDEFGHIJKLMNOPRSŠTUVZŽ", - wiki_start_pages=["Glavna_stran"], - ), - # Serbian can be written in both Latin and Cyrillic, but there's no - # simple way to get the Latin alphabet pages from Wikipedia through - # the API, so for now we just support Cyrillic. 
- "Serbian": Language( - name="Serbian", - iso_code="sr", - alphabet="АБВГДЂЕЖЗИЈКЛЉМНЊОПРСТЋУФХЦЧЏШабвгдђежзијклљмнњопрстћуфхцчџш", - charsets=["ISO-8859-5", "WINDOWS-1251", "MacCyrillic", "IBM855"], - wiki_start_pages=["Главна_страна"], - ), - "Thai": Language( - name="Thai", - iso_code="th", - use_ascii=False, - charsets=["ISO-8859-11", "TIS-620", "CP874"], - alphabet="กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛", - wiki_start_pages=["หน้าหลัก"], - ), - "Turkish": Language( - name="Turkish", - iso_code="tr", - # Q, W, and X are not used by Turkish - use_ascii=False, - charsets=["ISO-8859-3", "ISO-8859-9", "WINDOWS-1254"], - alphabet="abcçdefgğhıijklmnoöprsştuüvyzâîûABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZÂÎÛ", - wiki_start_pages=["Ana_Sayfa"], - ), - "Vietnamese": Language( - name="Vietnamese", - iso_code="vi", - use_ascii=False, - # Windows-1258 is the only common 8-bit - # Vietnamese encoding supported by Python. - # From Wikipedia: - # For systems that lack support for Unicode, - # dozens of 8-bit Vietnamese code pages are - # available.[1] The most common are VISCII - # (TCVN 5712:1993), VPS, and Windows-1258.[3] - # Where ASCII is required, such as when - # ensuring readability in plain text e-mail, - # Vietnamese letters are often encoded - # according to Vietnamese Quoted-Readable - # (VIQR) or VSCII Mnemonic (VSCII-MNEM),[4] - # though usage of either variable-width - # scheme has declined dramatically following - # the adoption of Unicode on the World Wide - # Web. - charsets=["WINDOWS-1258"], - alphabet="aăâbcdđeêghiklmnoôơpqrstuưvxyAĂÂBCDĐEÊGHIKLMNOÔƠPQRSTUƯVXY", - wiki_start_pages=["Chữ_Quốc_ngữ"], - ), -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/cookies.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/cookies.py deleted file mode 100644 index bf54ab237e410603061b8cec8fd195912d3cfb08..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/cookies.py +++ /dev/null @@ -1,561 +0,0 @@ -""" -requests.cookies -~~~~~~~~~~~~~~~~ - -Compatibility code to be able to use `cookielib.CookieJar` with requests. - -requests.utils imports from here, so be careful with imports. -""" - -import calendar -import copy -import time - -from ._internal_utils import to_native_string -from .compat import Morsel, MutableMapping, cookielib, urlparse, urlunparse - -try: - import threading -except ImportError: - import dummy_threading as threading - - -class MockRequest: - """Wraps a `requests.Request` to mimic a `urllib2.Request`. - - The code in `cookielib.CookieJar` expects this interface in order to correctly - manage cookie policies, i.e., determine whether a cookie can be set, given the - domains of the request and the cookie. - - The original request object is read-only. The client is responsible for collecting - the new headers via `get_new_headers()` and interpreting them appropriately. You - probably want `get_cookie_header`, defined below. 
- """ - - def __init__(self, request): - self._r = request - self._new_headers = {} - self.type = urlparse(self._r.url).scheme - - def get_type(self): - return self.type - - def get_host(self): - return urlparse(self._r.url).netloc - - def get_origin_req_host(self): - return self.get_host() - - def get_full_url(self): - # Only return the response's URL if the user hadn't set the Host - # header - if not self._r.headers.get("Host"): - return self._r.url - # If they did set it, retrieve it and reconstruct the expected domain - host = to_native_string(self._r.headers["Host"], encoding="utf-8") - parsed = urlparse(self._r.url) - # Reconstruct the URL as we expect it - return urlunparse( - [ - parsed.scheme, - host, - parsed.path, - parsed.params, - parsed.query, - parsed.fragment, - ] - ) - - def is_unverifiable(self): - return True - - def has_header(self, name): - return name in self._r.headers or name in self._new_headers - - def get_header(self, name, default=None): - return self._r.headers.get(name, self._new_headers.get(name, default)) - - def add_header(self, key, val): - """cookielib has no legitimate use for this method; add it back if you find one.""" - raise NotImplementedError( - "Cookie headers should be added with add_unredirected_header()" - ) - - def add_unredirected_header(self, name, value): - self._new_headers[name] = value - - def get_new_headers(self): - return self._new_headers - - @property - def unverifiable(self): - return self.is_unverifiable() - - @property - def origin_req_host(self): - return self.get_origin_req_host() - - @property - def host(self): - return self.get_host() - - -class MockResponse: - """Wraps a `httplib.HTTPMessage` to mimic a `urllib.addinfourl`. - - ...what? Basically, expose the parsed HTTP headers from the server response - the way `cookielib` expects to see them. - """ - - def __init__(self, headers): - """Make a MockResponse for `cookielib` to read. - - :param headers: a httplib.HTTPMessage or analogous carrying the headers - """ - self._headers = headers - - def info(self): - return self._headers - - def getheaders(self, name): - self._headers.getheaders(name) - - -def extract_cookies_to_jar(jar, request, response): - """Extract the cookies from the response into a CookieJar. - - :param jar: cookielib.CookieJar (not necessarily a RequestsCookieJar) - :param request: our own requests.Request object - :param response: urllib3.HTTPResponse object - """ - if not (hasattr(response, "_original_response") and response._original_response): - return - # the _original_response field is the wrapped httplib.HTTPResponse object, - req = MockRequest(request) - # pull out the HTTPMessage with the headers and put it in the mock: - res = MockResponse(response._original_response.msg) - jar.extract_cookies(res, req) - - -def get_cookie_header(jar, request): - """ - Produce an appropriate Cookie header string to be sent with `request`, or None. - - :rtype: str - """ - r = MockRequest(request) - jar.add_cookie_header(r) - return r.get_new_headers().get("Cookie") - - -def remove_cookie_by_name(cookiejar, name, domain=None, path=None): - """Unsets a cookie by name, by default over all domains and paths. - - Wraps CookieJar.clear(), is O(n). 
- """ - clearables = [] - for cookie in cookiejar: - if cookie.name != name: - continue - if domain is not None and domain != cookie.domain: - continue - if path is not None and path != cookie.path: - continue - clearables.append((cookie.domain, cookie.path, cookie.name)) - - for domain, path, name in clearables: - cookiejar.clear(domain, path, name) - - -class CookieConflictError(RuntimeError): - """There are two cookies that meet the criteria specified in the cookie jar. - Use .get and .set and include domain and path args in order to be more specific. - """ - - -class RequestsCookieJar(cookielib.CookieJar, MutableMapping): - """Compatibility class; is a cookielib.CookieJar, but exposes a dict - interface. - - This is the CookieJar we create by default for requests and sessions that - don't specify one, since some clients may expect response.cookies and - session.cookies to support dict operations. - - Requests does not use the dict interface internally; it's just for - compatibility with external client code. All requests code should work - out of the box with externally provided instances of ``CookieJar``, e.g. - ``LWPCookieJar`` and ``FileCookieJar``. - - Unlike a regular CookieJar, this class is pickleable. - - .. warning:: dictionary operations that are normally O(1) may be O(n). - """ - - def get(self, name, default=None, domain=None, path=None): - """Dict-like get() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - - .. warning:: operation is O(n), not O(1). - """ - try: - return self._find_no_duplicates(name, domain, path) - except KeyError: - return default - - def set(self, name, value, **kwargs): - """Dict-like set() that also supports optional domain and path args in - order to resolve naming collisions from using one cookie jar over - multiple domains. - """ - # support client code that unsets cookies by assignment of a None value: - if value is None: - remove_cookie_by_name( - self, name, domain=kwargs.get("domain"), path=kwargs.get("path") - ) - return - - if isinstance(value, Morsel): - c = morsel_to_cookie(value) - else: - c = create_cookie(name, value, **kwargs) - self.set_cookie(c) - return c - - def iterkeys(self): - """Dict-like iterkeys() that returns an iterator of names of cookies - from the jar. - - .. seealso:: itervalues() and iteritems(). - """ - for cookie in iter(self): - yield cookie.name - - def keys(self): - """Dict-like keys() that returns a list of names of cookies from the - jar. - - .. seealso:: values() and items(). - """ - return list(self.iterkeys()) - - def itervalues(self): - """Dict-like itervalues() that returns an iterator of values of cookies - from the jar. - - .. seealso:: iterkeys() and iteritems(). - """ - for cookie in iter(self): - yield cookie.value - - def values(self): - """Dict-like values() that returns a list of values of cookies from the - jar. - - .. seealso:: keys() and items(). - """ - return list(self.itervalues()) - - def iteritems(self): - """Dict-like iteritems() that returns an iterator of name-value tuples - from the jar. - - .. seealso:: iterkeys() and itervalues(). - """ - for cookie in iter(self): - yield cookie.name, cookie.value - - def items(self): - """Dict-like items() that returns a list of name-value tuples from the - jar. Allows client-code to call ``dict(RequestsCookieJar)`` and get a - vanilla python dict of key value pairs. - - .. seealso:: keys() and values(). 
- """ - return list(self.iteritems()) - - def list_domains(self): - """Utility method to list all the domains in the jar.""" - domains = [] - for cookie in iter(self): - if cookie.domain not in domains: - domains.append(cookie.domain) - return domains - - def list_paths(self): - """Utility method to list all the paths in the jar.""" - paths = [] - for cookie in iter(self): - if cookie.path not in paths: - paths.append(cookie.path) - return paths - - def multiple_domains(self): - """Returns True if there are multiple domains in the jar. - Returns False otherwise. - - :rtype: bool - """ - domains = [] - for cookie in iter(self): - if cookie.domain is not None and cookie.domain in domains: - return True - domains.append(cookie.domain) - return False # there is only one domain in jar - - def get_dict(self, domain=None, path=None): - """Takes as an argument an optional domain and path and returns a plain - old Python dict of name-value pairs of cookies that meet the - requirements. - - :rtype: dict - """ - dictionary = {} - for cookie in iter(self): - if (domain is None or cookie.domain == domain) and ( - path is None or cookie.path == path - ): - dictionary[cookie.name] = cookie.value - return dictionary - - def __contains__(self, name): - try: - return super().__contains__(name) - except CookieConflictError: - return True - - def __getitem__(self, name): - """Dict-like __getitem__() for compatibility with client code. Throws - exception if there are more than one cookie with name. In that case, - use the more explicit get() method instead. - - .. warning:: operation is O(n), not O(1). - """ - return self._find_no_duplicates(name) - - def __setitem__(self, name, value): - """Dict-like __setitem__ for compatibility with client code. Throws - exception if there is already a cookie of that name in the jar. In that - case, use the more explicit set() method instead. - """ - self.set(name, value) - - def __delitem__(self, name): - """Deletes a cookie given a name. Wraps ``cookielib.CookieJar``'s - ``remove_cookie_by_name()``. - """ - remove_cookie_by_name(self, name) - - def set_cookie(self, cookie, *args, **kwargs): - if ( - hasattr(cookie.value, "startswith") - and cookie.value.startswith('"') - and cookie.value.endswith('"') - ): - cookie.value = cookie.value.replace('\\"', "") - return super().set_cookie(cookie, *args, **kwargs) - - def update(self, other): - """Updates this jar with cookies from another CookieJar or dict-like""" - if isinstance(other, cookielib.CookieJar): - for cookie in other: - self.set_cookie(copy.copy(cookie)) - else: - super().update(other) - - def _find(self, name, domain=None, path=None): - """Requests uses this method internally to get cookie values. - - If there are conflicting cookies, _find arbitrarily chooses one. - See _find_no_duplicates if you want an exception thrown if there are - conflicting cookies. - - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :return: cookie.value - """ - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - return cookie.value - - raise KeyError(f"name={name!r}, domain={domain!r}, path={path!r}") - - def _find_no_duplicates(self, name, domain=None, path=None): - """Both ``__get_item__`` and ``get`` call this function: it's never - used elsewhere in Requests. 
- - :param name: a string containing name of cookie - :param domain: (optional) string containing domain of cookie - :param path: (optional) string containing path of cookie - :raises KeyError: if cookie is not found - :raises CookieConflictError: if there are multiple cookies - that match name and optionally domain and path - :return: cookie.value - """ - toReturn = None - for cookie in iter(self): - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - if toReturn is not None: - # if there are multiple cookies that meet passed in criteria - raise CookieConflictError( - f"There are multiple cookies with name, {name!r}" - ) - # we will eventually return this as long as no cookie conflict - toReturn = cookie.value - - if toReturn: - return toReturn - raise KeyError(f"name={name!r}, domain={domain!r}, path={path!r}") - - def __getstate__(self): - """Unlike a normal CookieJar, this class is pickleable.""" - state = self.__dict__.copy() - # remove the unpickleable RLock object - state.pop("_cookies_lock") - return state - - def __setstate__(self, state): - """Unlike a normal CookieJar, this class is pickleable.""" - self.__dict__.update(state) - if "_cookies_lock" not in self.__dict__: - self._cookies_lock = threading.RLock() - - def copy(self): - """Return a copy of this RequestsCookieJar.""" - new_cj = RequestsCookieJar() - new_cj.set_policy(self.get_policy()) - new_cj.update(self) - return new_cj - - def get_policy(self): - """Return the CookiePolicy instance used.""" - return self._policy - - -def _copy_cookie_jar(jar): - if jar is None: - return None - - if hasattr(jar, "copy"): - # We're dealing with an instance of RequestsCookieJar - return jar.copy() - # We're dealing with a generic CookieJar instance - new_jar = copy.copy(jar) - new_jar.clear() - for cookie in jar: - new_jar.set_cookie(copy.copy(cookie)) - return new_jar - - -def create_cookie(name, value, **kwargs): - """Make a cookie from underspecified parameters. - - By default, the pair of `name` and `value` will be set for the domain '' - and sent on every request (this is sometimes called a "supercookie"). 
- """ - result = { - "version": 0, - "name": name, - "value": value, - "port": None, - "domain": "", - "path": "/", - "secure": False, - "expires": None, - "discard": True, - "comment": None, - "comment_url": None, - "rest": {"HttpOnly": None}, - "rfc2109": False, - } - - badargs = set(kwargs) - set(result) - if badargs: - raise TypeError( - f"create_cookie() got unexpected keyword arguments: {list(badargs)}" - ) - - result.update(kwargs) - result["port_specified"] = bool(result["port"]) - result["domain_specified"] = bool(result["domain"]) - result["domain_initial_dot"] = result["domain"].startswith(".") - result["path_specified"] = bool(result["path"]) - - return cookielib.Cookie(**result) - - -def morsel_to_cookie(morsel): - """Convert a Morsel object into a Cookie containing the one k/v pair.""" - - expires = None - if morsel["max-age"]: - try: - expires = int(time.time() + int(morsel["max-age"])) - except ValueError: - raise TypeError(f"max-age: {morsel['max-age']} must be integer") - elif morsel["expires"]: - time_template = "%a, %d-%b-%Y %H:%M:%S GMT" - expires = calendar.timegm(time.strptime(morsel["expires"], time_template)) - return create_cookie( - comment=morsel["comment"], - comment_url=bool(morsel["comment"]), - discard=False, - domain=morsel["domain"], - expires=expires, - name=morsel.key, - path=morsel["path"], - port=None, - rest={"HttpOnly": morsel["httponly"]}, - rfc2109=False, - secure=bool(morsel["secure"]), - value=morsel.value, - version=morsel["version"] or 0, - ) - - -def cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True): - """Returns a CookieJar from a key/value dictionary. - - :param cookie_dict: Dict of key/values to insert into CookieJar. - :param cookiejar: (optional) A cookiejar to add the cookies to. - :param overwrite: (optional) If False, will not replace cookies - already in the jar with new ones. - :rtype: CookieJar - """ - if cookiejar is None: - cookiejar = RequestsCookieJar() - - if cookie_dict is not None: - names_from_jar = [cookie.name for cookie in cookiejar] - for name in cookie_dict: - if overwrite or (name not in names_from_jar): - cookiejar.set_cookie(create_cookie(name, cookie_dict[name])) - - return cookiejar - - -def merge_cookies(cookiejar, cookies): - """Add cookies to cookiejar and returns a merged CookieJar. - - :param cookiejar: CookieJar object to add the cookies to. - :param cookies: Dictionary or CookieJar object to be added. 
- :rtype: CookieJar - """ - if not isinstance(cookiejar, cookielib.CookieJar): - raise ValueError("You can only merge into CookieJar") - - if isinstance(cookies, dict): - cookiejar = cookiejar_from_dict(cookies, cookiejar=cookiejar, overwrite=False) - elif isinstance(cookies, cookielib.CookieJar): - try: - cookiejar.update(cookies) - except AttributeError: - for cookie_in_jar in cookies: - cookiejar.set_cookie(cookie_in_jar) - - return cookiejar diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/utils/ans_punct.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/utils/ans_punct.py deleted file mode 100644 index b5e5aa205ee578fd36b4d4b52524e8dcef5b3721..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/utils/ans_punct.py +++ /dev/null @@ -1,105 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Yuhao Cui https://github.com/cuiyuhao1996 -# based on VQA Evaluation Code -# -------------------------------------------------------- - -import re - -contractions = { - "aint": "ain't", "arent": "aren't", "cant": "can't", "couldve": - "could've", "couldnt": "couldn't", "couldn'tve": "couldn't've", - "couldnt've": "couldn't've", "didnt": "didn't", "doesnt": - "doesn't", "dont": "don't", "hadnt": "hadn't", "hadnt've": - "hadn't've", "hadn'tve": "hadn't've", "hasnt": "hasn't", "havent": - "haven't", "hed": "he'd", "hed've": "he'd've", "he'dve": - "he'd've", "hes": "he's", "howd": "how'd", "howll": "how'll", - "hows": "how's", "Id've": "I'd've", "I'dve": "I'd've", "Im": - "I'm", "Ive": "I've", "isnt": "isn't", "itd": "it'd", "itd've": - "it'd've", "it'dve": "it'd've", "itll": "it'll", "let's": "let's", - "maam": "ma'am", "mightnt": "mightn't", "mightnt've": - "mightn't've", "mightn'tve": "mightn't've", "mightve": "might've", - "mustnt": "mustn't", "mustve": "must've", "neednt": "needn't", - "notve": "not've", "oclock": "o'clock", "oughtnt": "oughtn't", - "ow's'at": "'ow's'at", "'ows'at": "'ow's'at", "'ow'sat": - "'ow's'at", "shant": "shan't", "shed've": "she'd've", "she'dve": - "she'd've", "she's": "she's", "shouldve": "should've", "shouldnt": - "shouldn't", "shouldnt've": "shouldn't've", "shouldn'tve": - "shouldn't've", "somebody'd": "somebodyd", "somebodyd've": - "somebody'd've", "somebody'dve": "somebody'd've", "somebodyll": - "somebody'll", "somebodys": "somebody's", "someoned": "someone'd", - "someoned've": "someone'd've", "someone'dve": "someone'd've", - "someonell": "someone'll", "someones": "someone's", "somethingd": - "something'd", "somethingd've": "something'd've", "something'dve": - "something'd've", "somethingll": "something'll", "thats": - "that's", "thered": "there'd", "thered've": "there'd've", - "there'dve": "there'd've", "therere": "there're", "theres": - "there's", "theyd": "they'd", "theyd've": "they'd've", "they'dve": - "they'd've", "theyll": "they'll", "theyre": "they're", "theyve": - "they've", "twas": "'twas", "wasnt": "wasn't", "wed've": - "we'd've", "we'dve": "we'd've", "weve": "we've", "werent": - "weren't", "whatll": "what'll", "whatre": "what're", "whats": - "what's", "whatve": "what've", "whens": "when's", "whered": - "where'd", "wheres": "where's", "whereve": "where've", "whod": - "who'd", "whod've": "who'd've", "who'dve": "who'd've", "wholl": - "who'll", "whos": "who's", "whove": "who've", "whyll": "why'll", - "whyre": "why're", "whys": "why's", "wont": "won't", "wouldve": - "would've", "wouldnt": "wouldn't", "wouldnt've": "wouldn't've", - "wouldn'tve": 
"wouldn't've", "yall": "y'all", "yall'll": - "y'all'll", "y'allll": "y'all'll", "yall'd've": "y'all'd've", - "y'alld've": "y'all'd've", "y'all'dve": "y'all'd've", "youd": - "you'd", "youd've": "you'd've", "you'dve": "you'd've", "youll": - "you'll", "youre": "you're", "youve": "you've" -} - -manual_map = { 'none': '0', - 'zero': '0', - 'one': '1', - 'two': '2', - 'three': '3', - 'four': '4', - 'five': '5', - 'six': '6', - 'seven': '7', - 'eight': '8', - 'nine': '9', - 'ten': '10'} -articles = ['a', 'an', 'the'] -period_strip = re.compile("(?!<=\d)(\.)(?!\d)") -comma_strip = re.compile("(\d)(\,)(\d)") -punct = [';', r"/", '[', ']', '"', '{', '}', - '(', ')', '=', '+', '\\', '_', '-', - '>', '<', '@', '`', ',', '?', '!'] - -def process_punctuation(inText): - outText = inText - for p in punct: - if (p + ' ' in inText or ' ' + p in inText) \ - or (re.search(comma_strip, inText) != None): - outText = outText.replace(p, '') - else: - outText = outText.replace(p, ' ') - outText = period_strip.sub("", outText, re.UNICODE) - return outText - - -def process_digit_article(inText): - outText = [] - tempText = inText.lower().split() - for word in tempText: - word = manual_map.setdefault(word, word) - if word not in articles: - outText.append(word) - else: - pass - for wordId, word in enumerate(outText): - if word in contractions: - outText[wordId] = contractions[word] - outText = ' '.join(outText) - return outText - - -def prep_ans(answer): - answer = process_digit_article(process_punctuation(answer)) - answer = answer.replace(',', '') - return answer diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/config.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/config.h deleted file mode 100644 index 800bc4c51a8bedd5dc922da8a980dc62f02c62aa..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/config.h +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file config.h - * \brief Defines platform configuration. - */ - -#pragma once - -// NOTE: The order of these #includes matters. - -#include -#include -#include -#include -#include -// host_system.h & device_system.h must be #included as early as possible -// because other config headers depend on it -#include -#include -#include -#include -#include -#include -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/binary_search.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/binary_search.h deleted file mode 100644 index 0847e5d1fdb3a446651897d62c959d56ad9dd1b9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/binary_search.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits binary_search -#include - diff --git a/spaces/CVPR/WALT/mmdet/core/evaluation/bbox_overlaps.py b/spaces/CVPR/WALT/mmdet/core/evaluation/bbox_overlaps.py deleted file mode 100644 index 93559ea0f25369d552a5365312fa32b9ffec9226..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/evaluation/bbox_overlaps.py +++ /dev/null @@ -1,48 +0,0 @@ -import numpy as np - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', eps=1e-6): - """Calculate the ious between each bbox of bboxes1 and bboxes2. - - Args: - bboxes1(ndarray): shape (n, 4) - bboxes2(ndarray): shape (k, 4) - mode(str): iou (intersection over union) or iof (intersection - over foreground) - - Returns: - ious(ndarray): shape (n, k) - """ - - assert mode in ['iou', 'iof'] - - bboxes1 = bboxes1.astype(np.float32) - bboxes2 = bboxes2.astype(np.float32) - rows = bboxes1.shape[0] - cols = bboxes2.shape[0] - ious = np.zeros((rows, cols), dtype=np.float32) - if rows * cols == 0: - return ious - exchange = False - if bboxes1.shape[0] > bboxes2.shape[0]: - bboxes1, bboxes2 = bboxes2, bboxes1 - ious = np.zeros((cols, rows), dtype=np.float32) - exchange = True - area1 = (bboxes1[:, 2] - bboxes1[:, 0]) * (bboxes1[:, 3] - bboxes1[:, 1]) - area2 = (bboxes2[:, 2] - bboxes2[:, 0]) * (bboxes2[:, 3] - bboxes2[:, 1]) - for i in range(bboxes1.shape[0]): - x_start = np.maximum(bboxes1[i, 0], bboxes2[:, 0]) - y_start = np.maximum(bboxes1[i, 1], bboxes2[:, 1]) - x_end = np.minimum(bboxes1[i, 2], bboxes2[:, 2]) - y_end = np.minimum(bboxes1[i, 3], bboxes2[:, 3]) - overlap = np.maximum(x_end - x_start, 0) * np.maximum( - y_end - y_start, 0) - if mode == 'iou': - union = area1[i] + area2 - overlap - else: - union = area1[i] if not exchange else area2 - union = np.maximum(union, eps) - ious[i, :] = overlap / union - if exchange: - ious = ious.T - return ious diff --git a/spaces/CVPR/WALT/mmdet/models/losses/smooth_l1_loss.py b/spaces/CVPR/WALT/mmdet/models/losses/smooth_l1_loss.py deleted file mode 100644 index ec9c98a52d1932d6ccff18938c17c36755bf1baf..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/losses/smooth_l1_loss.py +++ /dev/null @@ -1,139 +0,0 @@ -import mmcv -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def smooth_l1_loss(pred, target, beta=1.0): - """Smooth L1 loss. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - - Returns: - torch.Tensor: Calculated loss - """ - assert beta > 0 - assert pred.size() == target.size() and target.numel() > 0 - diff = torch.abs(pred - target) - loss = torch.where(diff < beta, 0.5 * diff * diff / beta, - diff - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def l1_loss(pred, target): - """L1 loss. - - Args: - pred (torch.Tensor): The prediction. 
- target (torch.Tensor): The learning target of the prediction. - - Returns: - torch.Tensor: Calculated loss - """ - assert pred.size() == target.size() and target.numel() > 0 - loss = torch.abs(pred - target) - return loss - - -@LOSSES.register_module() -class SmoothL1Loss(nn.Module): - """Smooth L1 loss. - - Args: - beta (float, optional): The threshold in the piecewise function. - Defaults to 1.0. - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". Defaults to "mean". - loss_weight (float, optional): The weight of loss. - """ - - def __init__(self, beta=1.0, reduction='mean', loss_weight=1.0): - super(SmoothL1Loss, self).__init__() - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * smooth_l1_loss( - pred, - target, - weight, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox - - -@LOSSES.register_module() -class L1Loss(nn.Module): - """L1 loss. - - Args: - reduction (str, optional): The method to reduce the loss. - Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of loss. - """ - - def __init__(self, reduction='mean', loss_weight=1.0): - super(L1Loss, self).__init__() - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * l1_loss( - pred, target, weight, reduction=reduction, avg_factor=avg_factor) - return loss_bbox diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/__init__.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/__init__.py deleted file mode 100644 index 82e1a9096a5bd8f3fb00e899d0239b078246cad4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/modules/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -import logging - -from saicinpainting.training.modules.ffc import FFCResNetGenerator -from saicinpainting.training.modules.pix2pixhd import GlobalGenerator, MultiDilatedGlobalGenerator, \ - NLayerDiscriminator, MultidilatedNLayerDiscriminator - -def make_generator(config, kind, **kwargs): - logging.info(f'Make generator {kind}') - - if kind == 'pix2pixhd_multidilated': - return MultiDilatedGlobalGenerator(**kwargs) - - if kind == 'pix2pixhd_global': - return GlobalGenerator(**kwargs) - - if kind == 'ffc_resnet': - return FFCResNetGenerator(**kwargs) - - raise ValueError(f'Unknown generator kind {kind}') - - -def make_discriminator(kind, **kwargs): - logging.info(f'Make discriminator {kind}') - - if kind == 'pix2pixhd_nlayer_multidilated': - return MultidilatedNLayerDiscriminator(**kwargs) - - if kind == 'pix2pixhd_nlayer': - return NLayerDiscriminator(**kwargs) - - raise ValueError(f'Unknown discriminator kind {kind}') diff --git a/spaces/CVPR/regionclip-demo/detectron2/evaluation/cityscapes_evaluation.py b/spaces/CVPR/regionclip-demo/detectron2/evaluation/cityscapes_evaluation.py deleted file mode 100644 index 3fb6c4cd5f752d639570d022cb23ce18491c370a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/evaluation/cityscapes_evaluation.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import glob -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -import torch -from PIL import Image - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class CityscapesEvaluator(DatasetEvaluator): - """ - Base class for evaluation using cityscapes API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): the name of the dataset. - It must have the following metadata associated with it: - "thing_classes", "gt_dir". - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_") - self._temp_dir = self._working_dir.name - # All workers will write to the same results directory - # TODO this does not work in distributed training - self._temp_dir = comm.all_gather(self._temp_dir)[0] - if self._temp_dir != self._working_dir.name: - self._working_dir.cleanup() - self._logger.info( - "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir) - ) - - -class CityscapesInstanceEvaluator(CityscapesEvaluator): - """ - Evaluate instance segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. 
- * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. - """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import name2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt") - - if "instances" in output: - output = output["instances"].to(self._cpu_device) - num_instances = len(output) - with open(pred_txt, "w") as fout: - for i in range(num_instances): - pred_class = output.pred_classes[i] - classes = self._metadata.thing_classes[pred_class] - class_id = name2label[classes].id - score = output.scores[i] - mask = output.pred_masks[i].numpy().astype("uint8") - png_filename = os.path.join( - self._temp_dir, basename + "_{}_{}.png".format(i, classes) - ) - - Image.fromarray(mask * 255).save(png_filename) - fout.write( - "{} {} {}\n".format(os.path.basename(png_filename), class_id, score) - ) - else: - # Cityscapes requires a prediction file for every ground truth image. - with open(pred_txt, "w") as fout: - pass - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP" and "AP50". - """ - comm.synchronize() - if comm.get_rank() > 0: - return - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json") - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - )["averages"] - - ret = OrderedDict() - ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100} - self._working_dir.cleanup() - return ret - - -class CityscapesSemSegEvaluator(CityscapesEvaluator): - """ - Evaluate semantic segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. 
- """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import trainId2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_filename = os.path.join(self._temp_dir, basename + "_pred.png") - - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy() - pred = 255 * np.ones(output.shape, dtype=np.uint8) - for train_id, label in trainId2label.items(): - if label.ignoreInEval: - continue - pred[output == train_id] = label.id - Image.fromarray(pred).save(pred_filename) - - def evaluate(self): - comm.synchronize() - if comm.get_rank() > 0: - return - # Load the Cityscapes eval script *after* setting the required env var, - # since the script reads CITYSCAPES_DATASET into global variables at load time. - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - ) - ret = OrderedDict() - ret["sem_seg"] = { - "IoU": 100.0 * results["averageScoreClasses"], - "iIoU": 100.0 * results["averageScoreInstClasses"], - "IoU_sup": 100.0 * results["averageScoreCategories"], - "iIoU_sup": 100.0 * results["averageScoreInstCategories"], - } - self._working_dir.cleanup() - return ret diff --git a/spaces/CognitiveLabs/Research-Assistant/README.md b/spaces/CognitiveLabs/Research-Assistant/README.md deleted file mode 100644 index 7a4e2ed156df4e83de5f21e3b8c463ee5c0ac09d..0000000000000000000000000000000000000000 --- a/spaces/CognitiveLabs/Research-Assistant/README.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: AI-Research-Assistant -app_file: app.py -sdk: gradio -sdk_version: 3.38.0 -duplicated_from: zej97/AI-Research-Assistant ---- -
- -
- English | - 中文 -
-
- -Inspired by [gpt-researcher](https://github.com/assafelovic/gpt-researcher). This project endeavors to develop an AI research assistant capable of **generating research reports** effortlessly for researchers. For instance, researchers can request the AI research assistant to compose a report on *the latest advancements in the field of superconductors as of 2023*, which is currently a trending topic. The AI research assistant will subsequently compile a report based on the relevant information obtained from the internet. Now, AIRA also offers support for **academic English polishing**. - - -| Example1-1 | Example1-2 | Example1-3 | -| :----------------------------------: | :----------------------------------: | :----------------------------------: | -| | | | - -The currently supported agents encompass a wide range of fields, including *finance, business analysis, clinical medicine, basic medicine, travel, academic research and sociology*. - -In addition to official api, this project offers an alternative approach to generating research reports by utilizing a third-party API. For access to this third-party API, please refer to [chimeragpt](https://chimeragpt.adventblocks.cc/) or [GPT-API-free](https://github.com/chatanywhere/GPT_API_free). Before running the project, kindly ensure that you set the environment variables `OPENAI_API_KEY` and `OPENAI_API_BASE`. - -```shell -$ export OPENAI_API_KEY = your_api_key -$ export OPENAI_API_BASE = your_api_base -``` - -or you can set the api key and base in `.env` file. - - -## Installation - -1. Clone the repository - - ```shell - $ git clone git@github.com:paradoxtown/ai_research_assistant.git - $ cd ai_research_assistant - ``` - -2. Install the dependencies - - ```shell - $ pip install -r requirements.txt - ``` - -3. Export evnironment variables - - ```shell - $ export OPENAI_API_KEY = your_api_key - $ export OPENAI_API_BASE = your_api_base - ``` - or modify the `.env` file. - -4. Run the project - - ```shell - $ python app.py - ``` - -## TODO - -- [x] Switch Google Search to DuckDuckGo -- [ ] Literature review -- [x] Third-party API -- [ ] Prettify report -- [x] Add medical agent and social agent -- [ ] Add option for users to customize the number of words and temperature -- [ ] Copy and download buttons -- [ ] Allows the user to choose the degree of research. -- [ ] Wikipedia Understanding - ---- - -
Happy researching! 🚀
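
The setup steps in this README rely on the `OPENAI_API_KEY` and `OPENAI_API_BASE` environment variables, set either with shell `export` commands or through a `.env` file. As a rough illustration of how the application side might pick these values up, here is a minimal Python sketch; the use of the `python-dotenv` package and the exact variable handling are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch: read the OpenAI credentials the README asks for.
# Assumption: the optional .env file is loaded with python-dotenv;
# the real app.py may handle this differently.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # pulls OPENAI_API_KEY / OPENAI_API_BASE from .env if present

api_key = os.getenv("OPENAI_API_KEY")
api_base = os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1")

if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; export it or add it to .env")

print(f"Using API base: {api_base}")
```

If the shell `export` route is used instead, `load_dotenv()` is effectively a no-op for variables that are already set, so both configuration paths described above can coexist.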
\ No newline at end of file diff --git a/spaces/Cropinky/hana_hanak_houses/networks_fastgan.py b/spaces/Cropinky/hana_hanak_houses/networks_fastgan.py deleted file mode 100644 index d517e6b53b7bb6d83ce5df00b5111073e3cf3c24..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/hana_hanak_houses/networks_fastgan.py +++ /dev/null @@ -1,179 +0,0 @@ -# original implementation: https://github.com/odegeasslbc/FastGAN-pytorch/blob/main/models.py -# -# modified by Axel Sauer for "Projected GANs Converge Faster" -# -import torch.nn as nn -from blocks import (InitLayer, UpBlockBig, UpBlockBigCond, UpBlockSmall, UpBlockSmallCond, SEBlock, conv2d) -from huggingface_hub import PyTorchModelHubMixin - -def normalize_second_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - - -class DummyMapping(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, z, c, **kwargs): - return z.unsqueeze(1) # to fit the StyleGAN API - - -class FastganSynthesis(nn.Module): - def __init__(self, ngf=128, z_dim=256, nc=3, img_resolution=256, lite=False): - super().__init__() - self.img_resolution = img_resolution - self.z_dim = z_dim - - # channel multiplier - nfc_multi = {2: 16, 4:16, 8:8, 16:4, 32:2, 64:2, 128:1, 256:0.5, - 512:0.25, 1024:0.125} - nfc = {} - for k, v in nfc_multi.items(): - nfc[k] = int(v*ngf) - - # layers - self.init = InitLayer(z_dim, channel=nfc[2], sz=4) - - UpBlock = UpBlockSmall if lite else UpBlockBig - - self.feat_8 = UpBlock(nfc[4], nfc[8]) - self.feat_16 = UpBlock(nfc[8], nfc[16]) - self.feat_32 = UpBlock(nfc[16], nfc[32]) - self.feat_64 = UpBlock(nfc[32], nfc[64]) - self.feat_128 = UpBlock(nfc[64], nfc[128]) - self.feat_256 = UpBlock(nfc[128], nfc[256]) - - self.se_64 = SEBlock(nfc[4], nfc[64]) - self.se_128 = SEBlock(nfc[8], nfc[128]) - self.se_256 = SEBlock(nfc[16], nfc[256]) - - self.to_big = conv2d(nfc[img_resolution], nc, 3, 1, 1, bias=True) - - if img_resolution > 256: - self.feat_512 = UpBlock(nfc[256], nfc[512]) - self.se_512 = SEBlock(nfc[32], nfc[512]) - if img_resolution > 512: - self.feat_1024 = UpBlock(nfc[512], nfc[1024]) - - def forward(self, input, c, **kwargs): - # map noise to hypersphere as in "Progressive Growing of GANS" - input = normalize_second_moment(input[:, 0]) - - feat_4 = self.init(input) - feat_8 = self.feat_8(feat_4) - feat_16 = self.feat_16(feat_8) - feat_32 = self.feat_32(feat_16) - feat_64 = self.se_64(feat_4, self.feat_64(feat_32)) - feat_128 = self.se_128(feat_8, self.feat_128(feat_64)) - - if self.img_resolution >= 128: - feat_last = feat_128 - - if self.img_resolution >= 256: - feat_last = self.se_256(feat_16, self.feat_256(feat_last)) - - if self.img_resolution >= 512: - feat_last = self.se_512(feat_32, self.feat_512(feat_last)) - - if self.img_resolution >= 1024: - feat_last = self.feat_1024(feat_last) - - return self.to_big(feat_last) - - -class FastganSynthesisCond(nn.Module): - def __init__(self, ngf=64, z_dim=256, nc=3, img_resolution=256, num_classes=1000, lite=False): - super().__init__() - - self.z_dim = z_dim - nfc_multi = {2: 16, 4:16, 8:8, 16:4, 32:2, 64:2, 128:1, 256:0.5, - 512:0.25, 1024:0.125, 2048:0.125} - nfc = {} - for k, v in nfc_multi.items(): - nfc[k] = int(v*ngf) - - self.img_resolution = img_resolution - - self.init = InitLayer(z_dim, channel=nfc[2], sz=4) - - UpBlock = UpBlockSmallCond if lite else UpBlockBigCond - - self.feat_8 = UpBlock(nfc[4], nfc[8], z_dim) - self.feat_16 = UpBlock(nfc[8], nfc[16], z_dim) - self.feat_32 = UpBlock(nfc[16], nfc[32], z_dim) - 
self.feat_64 = UpBlock(nfc[32], nfc[64], z_dim) - self.feat_128 = UpBlock(nfc[64], nfc[128], z_dim) - self.feat_256 = UpBlock(nfc[128], nfc[256], z_dim) - - self.se_64 = SEBlock(nfc[4], nfc[64]) - self.se_128 = SEBlock(nfc[8], nfc[128]) - self.se_256 = SEBlock(nfc[16], nfc[256]) - - self.to_big = conv2d(nfc[img_resolution], nc, 3, 1, 1, bias=True) - - if img_resolution > 256: - self.feat_512 = UpBlock(nfc[256], nfc[512]) - self.se_512 = SEBlock(nfc[32], nfc[512]) - if img_resolution > 512: - self.feat_1024 = UpBlock(nfc[512], nfc[1024]) - - self.embed = nn.Embedding(num_classes, z_dim) - - def forward(self, input, c, update_emas=False): - c = self.embed(c.argmax(1)) - - # map noise to hypersphere as in "Progressive Growing of GANS" - input = normalize_second_moment(input[:, 0]) - - feat_4 = self.init(input) - feat_8 = self.feat_8(feat_4, c) - feat_16 = self.feat_16(feat_8, c) - feat_32 = self.feat_32(feat_16, c) - feat_64 = self.se_64(feat_4, self.feat_64(feat_32, c)) - feat_128 = self.se_128(feat_8, self.feat_128(feat_64, c)) - - if self.img_resolution >= 128: - feat_last = feat_128 - - if self.img_resolution >= 256: - feat_last = self.se_256(feat_16, self.feat_256(feat_last, c)) - - if self.img_resolution >= 512: - feat_last = self.se_512(feat_32, self.feat_512(feat_last, c)) - - if self.img_resolution >= 1024: - feat_last = self.feat_1024(feat_last, c) - - return self.to_big(feat_last) - - -class MyGenerator(nn.Module, PyTorchModelHubMixin): - def __init__( - self, - z_dim=256, - c_dim=0, - w_dim=0, - img_resolution=256, - img_channels=3, - ngf=128, - cond=0, - mapping_kwargs={}, - synthesis_kwargs={} - ): - super().__init__() - #self.config = kwargs.pop("config", None) - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - - # Mapping and Synthesis Networks - self.mapping = DummyMapping() # to fit the StyleGAN API - Synthesis = FastganSynthesisCond if cond else FastganSynthesis - self.synthesis = Synthesis(ngf=ngf, z_dim=z_dim, nc=img_channels, img_resolution=img_resolution, **synthesis_kwargs) - - def forward(self, z, c, **kwargs): - w = self.mapping(z, c) - img = self.synthesis(w, c) - return img diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/McIdasImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/McIdasImagePlugin.py deleted file mode 100644 index 17c008b9a6a1218f6e51add4fda83acb92ee06ce..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/McIdasImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Basic McIdas support for PIL -# -# History: -# 1997-05-05 fl Created (8-bit images only) -# 2009-03-08 fl Added 16/32-bit support. -# -# Thanks to Richard Jones and Craig Swank for specs and samples. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -import struct - -from . import Image, ImageFile - - -def _accept(s): - return s[:8] == b"\x00\x00\x00\x00\x00\x00\x00\x04" - - -## -# Image plugin for McIdas area images. 
- - -class McIdasImageFile(ImageFile.ImageFile): - format = "MCIDAS" - format_description = "McIdas area file" - - def _open(self): - # parse area file directory - s = self.fp.read(256) - if not _accept(s) or len(s) != 256: - msg = "not an McIdas area file" - raise SyntaxError(msg) - - self.area_descriptor_raw = s - self.area_descriptor = w = [0] + list(struct.unpack("!64i", s)) - - # get mode - if w[11] == 1: - mode = rawmode = "L" - elif w[11] == 2: - # FIXME: add memory map support - mode = "I" - rawmode = "I;16B" - elif w[11] == 4: - # FIXME: add memory map support - mode = "I" - rawmode = "I;32B" - else: - msg = "unsupported McIdas format" - raise SyntaxError(msg) - - self.mode = mode - self._size = w[10], w[9] - - offset = w[34] + w[15] - stride = w[15] + w[10] * w[11] * w[14] - - self.tile = [("raw", (0, 0) + self.size, offset, (rawmode, stride, 1))] - - -# -------------------------------------------------------------------- -# registry - -Image.register_open(McIdasImageFile.format, McIdasImageFile, _accept) - -# no default extension diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/_termui_impl.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/_termui_impl.py deleted file mode 100644 index f744657753caa6cdef1dcc41a4f0b5e3e9503ab8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/click/_termui_impl.py +++ /dev/null @@ -1,739 +0,0 @@ -""" -This module contains implementations for the termui module. To keep the -import time of Click down, some infrequently used functionality is -placed in this module and only imported as needed. -""" -import contextlib -import math -import os -import sys -import time -import typing as t -from gettext import gettext as _ -from io import StringIO -from types import TracebackType - -from ._compat import _default_text_stdout -from ._compat import CYGWIN -from ._compat import get_best_encoding -from ._compat import isatty -from ._compat import open_stream -from ._compat import strip_ansi -from ._compat import term_len -from ._compat import WIN -from .exceptions import ClickException -from .utils import echo - -V = t.TypeVar("V") - -if os.name == "nt": - BEFORE_BAR = "\r" - AFTER_BAR = "\n" -else: - BEFORE_BAR = "\r\033[?25l" - AFTER_BAR = "\033[?25h\n" - - -class ProgressBar(t.Generic[V]): - def __init__( - self, - iterable: t.Optional[t.Iterable[V]], - length: t.Optional[int] = None, - fill_char: str = "#", - empty_char: str = " ", - bar_template: str = "%(bar)s", - info_sep: str = " ", - show_eta: bool = True, - show_percent: t.Optional[bool] = None, - show_pos: bool = False, - item_show_func: t.Optional[t.Callable[[t.Optional[V]], t.Optional[str]]] = None, - label: t.Optional[str] = None, - file: t.Optional[t.TextIO] = None, - color: t.Optional[bool] = None, - update_min_steps: int = 1, - width: int = 30, - ) -> None: - self.fill_char = fill_char - self.empty_char = empty_char - self.bar_template = bar_template - self.info_sep = info_sep - self.show_eta = show_eta - self.show_percent = show_percent - self.show_pos = show_pos - self.item_show_func = item_show_func - self.label: str = label or "" - - if file is None: - file = _default_text_stdout() - - # There are no standard streams attached to write to. For example, - # pythonw on Windows. 
- if file is None: - file = StringIO() - - self.file = file - self.color = color - self.update_min_steps = update_min_steps - self._completed_intervals = 0 - self.width: int = width - self.autowidth: bool = width == 0 - - if length is None: - from operator import length_hint - - length = length_hint(iterable, -1) - - if length == -1: - length = None - if iterable is None: - if length is None: - raise TypeError("iterable or length is required") - iterable = t.cast(t.Iterable[V], range(length)) - self.iter: t.Iterable[V] = iter(iterable) - self.length = length - self.pos = 0 - self.avg: t.List[float] = [] - self.last_eta: float - self.start: float - self.start = self.last_eta = time.time() - self.eta_known: bool = False - self.finished: bool = False - self.max_width: t.Optional[int] = None - self.entered: bool = False - self.current_item: t.Optional[V] = None - self.is_hidden: bool = not isatty(self.file) - self._last_line: t.Optional[str] = None - - def __enter__(self) -> "ProgressBar[V]": - self.entered = True - self.render_progress() - return self - - def __exit__( - self, - exc_type: t.Optional[t.Type[BaseException]], - exc_value: t.Optional[BaseException], - tb: t.Optional[TracebackType], - ) -> None: - self.render_finish() - - def __iter__(self) -> t.Iterator[V]: - if not self.entered: - raise RuntimeError("You need to use progress bars in a with block.") - self.render_progress() - return self.generator() - - def __next__(self) -> V: - # Iteration is defined in terms of a generator function, - # returned by iter(self); use that to define next(). This works - # because `self.iter` is an iterable consumed by that generator, - # so it is re-entry safe. Calling `next(self.generator())` - # twice works and does "what you want". - return next(iter(self)) - - def render_finish(self) -> None: - if self.is_hidden: - return - self.file.write(AFTER_BAR) - self.file.flush() - - @property - def pct(self) -> float: - if self.finished: - return 1.0 - return min(self.pos / (float(self.length or 1) or 1), 1.0) - - @property - def time_per_iteration(self) -> float: - if not self.avg: - return 0.0 - return sum(self.avg) / float(len(self.avg)) - - @property - def eta(self) -> float: - if self.length is not None and not self.finished: - return self.time_per_iteration * (self.length - self.pos) - return 0.0 - - def format_eta(self) -> str: - if self.eta_known: - t = int(self.eta) - seconds = t % 60 - t //= 60 - minutes = t % 60 - t //= 60 - hours = t % 24 - t //= 24 - if t > 0: - return f"{t}d {hours:02}:{minutes:02}:{seconds:02}" - else: - return f"{hours:02}:{minutes:02}:{seconds:02}" - return "" - - def format_pos(self) -> str: - pos = str(self.pos) - if self.length is not None: - pos += f"/{self.length}" - return pos - - def format_pct(self) -> str: - return f"{int(self.pct * 100): 4}%"[1:] - - def format_bar(self) -> str: - if self.length is not None: - bar_length = int(self.pct * self.width) - bar = self.fill_char * bar_length - bar += self.empty_char * (self.width - bar_length) - elif self.finished: - bar = self.fill_char * self.width - else: - chars = list(self.empty_char * (self.width or 1)) - if self.time_per_iteration != 0: - chars[ - int( - (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5) - * self.width - ) - ] = self.fill_char - bar = "".join(chars) - return bar - - def format_progress_line(self) -> str: - show_percent = self.show_percent - - info_bits = [] - if self.length is not None and show_percent is None: - show_percent = not self.show_pos - - if self.show_pos: - 
info_bits.append(self.format_pos()) - if show_percent: - info_bits.append(self.format_pct()) - if self.show_eta and self.eta_known and not self.finished: - info_bits.append(self.format_eta()) - if self.item_show_func is not None: - item_info = self.item_show_func(self.current_item) - if item_info is not None: - info_bits.append(item_info) - - return ( - self.bar_template - % { - "label": self.label, - "bar": self.format_bar(), - "info": self.info_sep.join(info_bits), - } - ).rstrip() - - def render_progress(self) -> None: - import shutil - - if self.is_hidden: - # Only output the label as it changes if the output is not a - # TTY. Use file=stderr if you expect to be piping stdout. - if self._last_line != self.label: - self._last_line = self.label - echo(self.label, file=self.file, color=self.color) - - return - - buf = [] - # Update width in case the terminal has been resized - if self.autowidth: - old_width = self.width - self.width = 0 - clutter_length = term_len(self.format_progress_line()) - new_width = max(0, shutil.get_terminal_size().columns - clutter_length) - if new_width < old_width: - buf.append(BEFORE_BAR) - buf.append(" " * self.max_width) # type: ignore - self.max_width = new_width - self.width = new_width - - clear_width = self.width - if self.max_width is not None: - clear_width = self.max_width - - buf.append(BEFORE_BAR) - line = self.format_progress_line() - line_len = term_len(line) - if self.max_width is None or self.max_width < line_len: - self.max_width = line_len - - buf.append(line) - buf.append(" " * (clear_width - line_len)) - line = "".join(buf) - # Render the line only if it changed. - - if line != self._last_line: - self._last_line = line - echo(line, file=self.file, color=self.color, nl=False) - self.file.flush() - - def make_step(self, n_steps: int) -> None: - self.pos += n_steps - if self.length is not None and self.pos >= self.length: - self.finished = True - - if (time.time() - self.last_eta) < 1.0: - return - - self.last_eta = time.time() - - # self.avg is a rolling list of length <= 7 of steps where steps are - # defined as time elapsed divided by the total progress through - # self.length. - if self.pos: - step = (time.time() - self.start) / self.pos - else: - step = time.time() - self.start - - self.avg = self.avg[-6:] + [step] - - self.eta_known = self.length is not None - - def update(self, n_steps: int, current_item: t.Optional[V] = None) -> None: - """Update the progress bar by advancing a specified number of - steps, and optionally set the ``current_item`` for this new - position. - - :param n_steps: Number of steps to advance. - :param current_item: Optional item to set as ``current_item`` - for the updated position. - - .. versionchanged:: 8.0 - Added the ``current_item`` optional parameter. - - .. versionchanged:: 8.0 - Only render when the number of steps meets the - ``update_min_steps`` threshold. - """ - if current_item is not None: - self.current_item = current_item - - self._completed_intervals += n_steps - - if self._completed_intervals >= self.update_min_steps: - self.make_step(self._completed_intervals) - self.render_progress() - self._completed_intervals = 0 - - def finish(self) -> None: - self.eta_known = False - self.current_item = None - self.finished = True - - def generator(self) -> t.Iterator[V]: - """Return a generator which yields the items added to the bar - during construction, and updates the progress bar *after* the - yielded block returns. 
- """ - # WARNING: the iterator interface for `ProgressBar` relies on - # this and only works because this is a simple generator which - # doesn't create or manage additional state. If this function - # changes, the impact should be evaluated both against - # `iter(bar)` and `next(bar)`. `next()` in particular may call - # `self.generator()` repeatedly, and this must remain safe in - # order for that interface to work. - if not self.entered: - raise RuntimeError("You need to use progress bars in a with block.") - - if self.is_hidden: - yield from self.iter - else: - for rv in self.iter: - self.current_item = rv - - # This allows show_item_func to be updated before the - # item is processed. Only trigger at the beginning of - # the update interval. - if self._completed_intervals == 0: - self.render_progress() - - yield rv - self.update(1) - - self.finish() - self.render_progress() - - -def pager(generator: t.Iterable[str], color: t.Optional[bool] = None) -> None: - """Decide what method to use for paging through text.""" - stdout = _default_text_stdout() - - # There are no standard streams attached to write to. For example, - # pythonw on Windows. - if stdout is None: - stdout = StringIO() - - if not isatty(sys.stdin) or not isatty(stdout): - return _nullpager(stdout, generator, color) - pager_cmd = (os.environ.get("PAGER", None) or "").strip() - if pager_cmd: - if WIN: - return _tempfilepager(generator, pager_cmd, color) - return _pipepager(generator, pager_cmd, color) - if os.environ.get("TERM") in ("dumb", "emacs"): - return _nullpager(stdout, generator, color) - if WIN or sys.platform.startswith("os2"): - return _tempfilepager(generator, "more <", color) - if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0: - return _pipepager(generator, "less", color) - - import tempfile - - fd, filename = tempfile.mkstemp() - os.close(fd) - try: - if hasattr(os, "system") and os.system(f'more "{filename}"') == 0: - return _pipepager(generator, "more", color) - return _nullpager(stdout, generator, color) - finally: - os.unlink(filename) - - -def _pipepager(generator: t.Iterable[str], cmd: str, color: t.Optional[bool]) -> None: - """Page through text by feeding it to another program. Invoking a - pager through this might support colors. - """ - import subprocess - - env = dict(os.environ) - - # If we're piping to less we might support colors under the - # condition that - cmd_detail = cmd.rsplit("/", 1)[-1].split() - if color is None and cmd_detail[0] == "less": - less_flags = f"{os.environ.get('LESS', '')}{' '.join(cmd_detail[1:])}" - if not less_flags: - env["LESS"] = "-R" - color = True - elif "r" in less_flags or "R" in less_flags: - color = True - - c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env) - stdin = t.cast(t.BinaryIO, c.stdin) - encoding = get_best_encoding(stdin) - try: - for text in generator: - if not color: - text = strip_ansi(text) - - stdin.write(text.encode(encoding, "replace")) - except (OSError, KeyboardInterrupt): - pass - else: - stdin.close() - - # Less doesn't respect ^C, but catches it for its own UI purposes (aborting - # search or other commands inside less). - # - # That means when the user hits ^C, the parent process (click) terminates, - # but less is still alive, paging the output and messing up the terminal. - # - # If the user wants to make the pager exit on ^C, they should set - # `LESS='-K'`. It's not our decision to make. 
- while True: - try: - c.wait() - except KeyboardInterrupt: - pass - else: - break - - -def _tempfilepager( - generator: t.Iterable[str], cmd: str, color: t.Optional[bool] -) -> None: - """Page through text by invoking a program on a temporary file.""" - import tempfile - - fd, filename = tempfile.mkstemp() - # TODO: This never terminates if the passed generator never terminates. - text = "".join(generator) - if not color: - text = strip_ansi(text) - encoding = get_best_encoding(sys.stdout) - with open_stream(filename, "wb")[0] as f: - f.write(text.encode(encoding)) - try: - os.system(f'{cmd} "{filename}"') - finally: - os.close(fd) - os.unlink(filename) - - -def _nullpager( - stream: t.TextIO, generator: t.Iterable[str], color: t.Optional[bool] -) -> None: - """Simply print unformatted text. This is the ultimate fallback.""" - for text in generator: - if not color: - text = strip_ansi(text) - stream.write(text) - - -class Editor: - def __init__( - self, - editor: t.Optional[str] = None, - env: t.Optional[t.Mapping[str, str]] = None, - require_save: bool = True, - extension: str = ".txt", - ) -> None: - self.editor = editor - self.env = env - self.require_save = require_save - self.extension = extension - - def get_editor(self) -> str: - if self.editor is not None: - return self.editor - for key in "VISUAL", "EDITOR": - rv = os.environ.get(key) - if rv: - return rv - if WIN: - return "notepad" - for editor in "sensible-editor", "vim", "nano": - if os.system(f"which {editor} >/dev/null 2>&1") == 0: - return editor - return "vi" - - def edit_file(self, filename: str) -> None: - import subprocess - - editor = self.get_editor() - environ: t.Optional[t.Dict[str, str]] = None - - if self.env: - environ = os.environ.copy() - environ.update(self.env) - - try: - c = subprocess.Popen(f'{editor} "{filename}"', env=environ, shell=True) - exit_code = c.wait() - if exit_code != 0: - raise ClickException( - _("{editor}: Editing failed").format(editor=editor) - ) - except OSError as e: - raise ClickException( - _("{editor}: Editing failed: {e}").format(editor=editor, e=e) - ) from e - - def edit(self, text: t.Optional[t.AnyStr]) -> t.Optional[t.AnyStr]: - import tempfile - - if not text: - data = b"" - elif isinstance(text, (bytes, bytearray)): - data = text - else: - if text and not text.endswith("\n"): - text += "\n" - - if WIN: - data = text.replace("\n", "\r\n").encode("utf-8-sig") - else: - data = text.encode("utf-8") - - fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension) - f: t.BinaryIO - - try: - with os.fdopen(fd, "wb") as f: - f.write(data) - - # If the filesystem resolution is 1 second, like Mac OS - # 10.12 Extended, or 2 seconds, like FAT32, and the editor - # closes very fast, require_save can fail. Set the modified - # time to be 2 seconds in the past to work around this. - os.utime(name, (os.path.getatime(name), os.path.getmtime(name) - 2)) - # Depending on the resolution, the exact value might not be - # recorded, so get the new recorded value. 
- timestamp = os.path.getmtime(name) - - self.edit_file(name) - - if self.require_save and os.path.getmtime(name) == timestamp: - return None - - with open(name, "rb") as f: - rv = f.read() - - if isinstance(text, (bytes, bytearray)): - return rv - - return rv.decode("utf-8-sig").replace("\r\n", "\n") # type: ignore - finally: - os.unlink(name) - - -def open_url(url: str, wait: bool = False, locate: bool = False) -> int: - import subprocess - - def _unquote_file(url: str) -> str: - from urllib.parse import unquote - - if url.startswith("file://"): - url = unquote(url[7:]) - - return url - - if sys.platform == "darwin": - args = ["open"] - if wait: - args.append("-W") - if locate: - args.append("-R") - args.append(_unquote_file(url)) - null = open("/dev/null", "w") - try: - return subprocess.Popen(args, stderr=null).wait() - finally: - null.close() - elif WIN: - if locate: - url = _unquote_file(url.replace('"', "")) - args = f'explorer /select,"{url}"' - else: - url = url.replace('"', "") - wait_str = "/WAIT" if wait else "" - args = f'start {wait_str} "" "{url}"' - return os.system(args) - elif CYGWIN: - if locate: - url = os.path.dirname(_unquote_file(url).replace('"', "")) - args = f'cygstart "{url}"' - else: - url = url.replace('"', "") - wait_str = "-w" if wait else "" - args = f'cygstart {wait_str} "{url}"' - return os.system(args) - - try: - if locate: - url = os.path.dirname(_unquote_file(url)) or "." - else: - url = _unquote_file(url) - c = subprocess.Popen(["xdg-open", url]) - if wait: - return c.wait() - return 0 - except OSError: - if url.startswith(("http://", "https://")) and not locate and not wait: - import webbrowser - - webbrowser.open(url) - return 0 - return 1 - - -def _translate_ch_to_exc(ch: str) -> t.Optional[BaseException]: - if ch == "\x03": - raise KeyboardInterrupt() - - if ch == "\x04" and not WIN: # Unix-like, Ctrl+D - raise EOFError() - - if ch == "\x1a" and WIN: # Windows, Ctrl+Z - raise EOFError() - - return None - - -if WIN: - import msvcrt - - @contextlib.contextmanager - def raw_terminal() -> t.Iterator[int]: - yield -1 - - def getchar(echo: bool) -> str: - # The function `getch` will return a bytes object corresponding to - # the pressed character. Since Windows 10 build 1803, it will also - # return \x00 when called a second time after pressing a regular key. - # - # `getwch` does not share this probably-bugged behavior. Moreover, it - # returns a Unicode object by default, which is what we want. - # - # Either of these functions will return \x00 or \xe0 to indicate - # a special key, and you need to call the same function again to get - # the "rest" of the code. The fun part is that \u00e0 is - # "latin small letter a with grave", so if you type that on a French - # keyboard, you _also_ get a \xe0. - # E.g., consider the Up arrow. This returns \xe0 and then \x48. The - # resulting Unicode string reads as "a with grave" + "capital H". - # This is indistinguishable from when the user actually types - # "a with grave" and then "capital H". - # - # When \xe0 is returned, we assume it's part of a special-key sequence - # and call `getwch` again, but that means that when the user types - # the \u00e0 character, `getchar` doesn't return until a second - # character is typed. - # The alternative is returning immediately, but that would mess up - # cross-platform handling of arrow keys and others that start with - # \xe0. 
Another option is using `getch`, but then we can't reliably - # read non-ASCII characters, because return values of `getch` are - # limited to the current 8-bit codepage. - # - # Anyway, Click doesn't claim to do this Right(tm), and using `getwch` - # is doing the right thing in more situations than with `getch`. - func: t.Callable[[], str] - - if echo: - func = msvcrt.getwche # type: ignore - else: - func = msvcrt.getwch # type: ignore - - rv = func() - - if rv in ("\x00", "\xe0"): - # \x00 and \xe0 are control characters that indicate special key, - # see above. - rv += func() - - _translate_ch_to_exc(rv) - return rv - -else: - import tty - import termios - - @contextlib.contextmanager - def raw_terminal() -> t.Iterator[int]: - f: t.Optional[t.TextIO] - fd: int - - if not isatty(sys.stdin): - f = open("/dev/tty") - fd = f.fileno() - else: - fd = sys.stdin.fileno() - f = None - - try: - old_settings = termios.tcgetattr(fd) - - try: - tty.setraw(fd) - yield fd - finally: - termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) - sys.stdout.flush() - - if f is not None: - f.close() - except termios.error: - pass - - def getchar(echo: bool) -> str: - with raw_terminal() as fd: - ch = os.read(fd, 32).decode(get_best_encoding(sys.stdin), "replace") - - if echo and isatty(sys.stdout): - sys.stdout.write(ch) - - _translate_ch_to_exc(ch) - return ch diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/__vite-browser-external-b25bb000.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/__vite-browser-external-b25bb000.js deleted file mode 100644 index efa8971d2172dd2c1924c07a4e2b2bc18871ccd9..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/__vite-browser-external-b25bb000.js +++ /dev/null @@ -1,2 +0,0 @@ -const e={};export{e as default}; -//# sourceMappingURL=__vite-browser-external-b25bb000.js.map diff --git a/spaces/DaleChen/AutoGPT/autogpt/permanent_memory/__init__.py b/spaces/DaleChen/AutoGPT/autogpt/permanent_memory/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Daniton/midjourney-singular/app.py b/spaces/Daniton/midjourney-singular/app.py deleted file mode 100644 index 2193905172b6fb6d868bff88cc8311f491ec13b3..0000000000000000000000000000000000000000 --- a/spaces/Daniton/midjourney-singular/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney").launch() \ No newline at end of file diff --git a/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/register_oid.py b/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/register_oid.py deleted file mode 100644 index bd281f53f07074740b453838ba32f42f81a28383..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/register_oid.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Xingyi Zhou from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/coco.py -import copy -import io -import logging -import contextlib -import os -import datetime -import json -import numpy as np - -from PIL import Image - -from fvcore.common.timer import Timer -from fvcore.common.file_io import PathManager, file_lock -from detectron2.structures import BoxMode, PolygonMasks, Boxes -from detectron2.data import DatasetCatalog, MetadataCatalog - -logger = logging.getLogger(__name__) - -""" -This file contains functions to register a COCO-format dataset to the DatasetCatalog. -""" - -__all__ = ["register_coco_instances", "register_coco_panoptic_separated"] - - - -def register_oid_instances(name, metadata, json_file, image_root): - """ - """ - # 1. register a function which returns dicts - DatasetCatalog.register(name, lambda: load_coco_json_mem_efficient( - json_file, image_root, name)) - - # 2. Optionally, add metadata about this dataset, - # since they might be useful in evaluation, visualization or logging - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, evaluator_type="oid", **metadata - ) - - -def load_coco_json_mem_efficient(json_file, image_root, dataset_name=None, extra_annotation_keys=None): - """ - Actually not mem efficient - """ - from pycocotools.coco import COCO - - timer = Timer() - json_file = PathManager.get_local_path(json_file) - with contextlib.redirect_stdout(io.StringIO()): - coco_api = COCO(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - id_map = None - if dataset_name is not None: - meta = MetadataCatalog.get(dataset_name) - cat_ids = sorted(coco_api.getCatIds()) - cats = coco_api.loadCats(cat_ids) - # The categories in a custom json file may not be sorted. - thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])] - meta.thing_classes = thing_classes - - if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)): - if "coco" not in dataset_name: - logger.warning( - """ - Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you. 
- """ - ) - id_map = {v: i for i, v in enumerate(cat_ids)} - meta.thing_dataset_id_to_contiguous_id = id_map - - # sort indices for reproducible results - img_ids = sorted(coco_api.imgs.keys()) - imgs = coco_api.loadImgs(img_ids) - logger.info("Loaded {} images in COCO format from {}".format(len(imgs), json_file)) - - dataset_dicts = [] - - ann_keys = ["iscrowd", "bbox", "category_id"] + (extra_annotation_keys or []) - - for img_dict in imgs: - record = {} - record["file_name"] = os.path.join(image_root, img_dict["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - image_id = record["image_id"] = img_dict["id"] - anno_dict_list = coco_api.imgToAnns[image_id] - if 'neg_category_ids' in img_dict: - record['neg_category_ids'] = \ - [id_map[x] for x in img_dict['neg_category_ids']] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - - assert anno.get("ignore", 0) == 0 - - obj = {key: anno[key] for key in ann_keys if key in anno} - - segm = anno.get("segmentation", None) - if segm: # either list[list[float]] or dict(RLE) - if not isinstance(segm, dict): - # filter out invalid polygons (< 3 points) - segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - if len(segm) == 0: - num_instances_without_valid_segmentation += 1 - continue # ignore this instance - obj["segmentation"] = segm - - obj["bbox_mode"] = BoxMode.XYWH_ABS - - if id_map: - obj["category_id"] = id_map[obj["category_id"]] - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - del coco_api - return dataset_dicts \ No newline at end of file diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/template_save/third_tab.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/template_save/third_tab.py deleted file mode 100644 index 000d5fc23042ba9463ad3bb47e8b468092070d17..0000000000000000000000000000000000000000 --- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/template_save/third_tab.py +++ /dev/null @@ -1,20 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np - - -title = "Data Vizualization" -sidebar_name = "Data Vizualization" - - -def run(): - - st.title(title) - - st.markdown( - """ - This is the third sample tab. - """ - ) - - st.write(pd.DataFrame(np.random.randn(100, 4), columns=list("ABCD"))) diff --git a/spaces/Dinoking/Garbage-Classifier-V6/app.py b/spaces/Dinoking/Garbage-Classifier-V6/app.py deleted file mode 100644 index 834db3bf2f01a727cecc871149b0a73166b2eea2..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Garbage-Classifier-V6/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import tensorflow as tf -import numpy as np -from PIL import Image -import tensorflow.keras as keras -import keras.applications.xception as xception -from tensorflow.keras.models import load_model - -# load model -model = load_model('model804.h5') - -classnames = ['battery','cardboard','clothes','food','glass','medical','metal','paper','plastic','shoes'] - - - -def predict_image(img): - img_4d=img.reshape(-1,320, 320,3) - prediction=model.predict(img_4d)[0] - return {classnames[i]: float(prediction[i]) for i in range(10)} - -image = gr.inputs.Image(shape=(320, 320)) -label = gr.outputs.Label(num_top_classes=3) -enable_queue=True -examples = ['battery.jpg','cardboard.jpeg','clothes.jpeg','glass.jpg','metal.jpg','plastic.jpg','shoes.jpg'] -article="

Made by Aditya Narendra with 🖤

" - -gr.Interface(fn=predict_image, inputs=image, title="Garbage Classifier", - description="This is a Garbage Classification Model Trained using Xception Net on DS11 Mod(Seg10 V4).Deployed to Hugging Faces using Gradio.",outputs=label,article=article,enable_queue=enable_queue,examples=examples,interpretation='default').launch(share="True") \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/__init__.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/__init__.py deleted file mode 100644 index a0b0f4efcbe1e3cd4199eeecb043d5afe1548307..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/DucHaiten/webui/app.py b/spaces/DucHaiten/webui/app.py deleted file mode 100644 index 5a08890d6b889c2623b84175d936a4432ede77e7..0000000000000000000000000000000000000000 --- a/spaces/DucHaiten/webui/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i '$a fastapi==0.90.0' /home/user/app/stable-diffusion-webui/requirements_versions.txt") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - -# os.system(f"wget -q https://huggingface.co/DucHaiten/DucHaitenAIart/resolve/main/DucHaitenAIart_v2.0.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DucHaitenAIart_v2.0-emaonly.safetensors") -os.system(f"wget -q https://huggingface.co/DucHaiten/DucHaitenDreamWorld/resolve/main/DucHaitenDreamWorld_v2.4.1.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DucHaitenDreamWorld_v2.4.1.safetensors") -os.system(f"wget -q https://huggingface.co/DucHaiten/DucHaitenAnime/resolve/main/DucHaitenAnime_v4.0.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DucHaitenAnime_v4.0.safetensors") -os.system(f"wget -q https://huggingface.co/DucHaiten/DucHaitenAnimated/resolve/main/DucHaitenAnimated_v5.0.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DucHaitenAnimated_v5.0.safetensors") -os.system(f"wget -q https://huggingface.co/DucHaiten/DucHaitenAIart/resolve/main/DucHaitenAIart_v3.1.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DucHaitenAIart_v3.1.safetensors") - -os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api") \ No newline at end of file diff --git a/spaces/Duskfallcrew/DreamlikeArt-PhotoReal-2.0/style.css b/spaces/Duskfallcrew/DreamlikeArt-PhotoReal-2.0/style.css deleted file mode 100644 index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/DreamlikeArt-PhotoReal-2.0/style.css +++ /dev/null @@ -1,84 +0,0 @@ -#col-container { - max-width: 800px; - margin-left: auto; - margin-right: auto; -} -a { - color: inherit; - text-decoration: underline; -} -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - border-color: #9d66e5; - background: #9d66e5; -} -input[type='range'] { - accent-color: #9d66e5; -} -.dark 
input[type='range'] { - accent-color: #dfdfdf; -} -.container { - max-width: 800px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -#advanced-options { - margin-bottom: 20px; -} -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .logo{ filter: invert(1); } -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - diff --git a/spaces/EPFL-VILAB/MultiMAE/FINETUNING.md b/spaces/EPFL-VILAB/MultiMAE/FINETUNING.md deleted file mode 100644 index f07f794064f8b5a3496f86eddbe05e1030fc5411..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/FINETUNING.md +++ /dev/null @@ -1,126 +0,0 @@ -# Fine-tuning - -We provide fine-tuning scripts for classification, semantic segmentation, depth estimation and more. -Please check [SETUP.md](SETUP.md) for set-up instructions first. - -- [General information](#general-information) -- [Classification](#classification) -- [Semantic segmentation](#semantic-segmentation) -- [Depth estimation](#depth-estimation) -- [Taskonomy tasks](#taskonomy-tasks) - -## General information - -### Loading pre-trained models - -All our fine-tuning scripts support models in the MultiMAE / MultiViT format. Pre-trained models using the timm / ViT format can be converted to this format using the [`vit2multimae_converter.py`](tools/vit2multimae_converter.py) - script. More information can be found [here](README.md#model-formats). - -### Modifying configs -The training scripts support both YAML config files and command-line arguments. See [here](cfgs/finetune) for all fine-tuning config files. - -To modify fine-training settings, either edit / add config files or provide additional command-line arguments. - -:information_source: Config files arguments override default arguments, and command-line arguments override both default arguments and config arguments. - -:warning: When changing settings (e.g., using a different pre-trained model), make sure to modify the `output_dir` and `wandb_run_name` (if logging is activated) to reflect the changes. - - -### Experiment logging -To activate logging to [Weights & Biases](https://docs.wandb.ai/), either edit the config files or use the `--log_wandb` flag along with any other extra logging arguments. - - -## Classification - -We use 8 A100 GPUs for classification fine-tuning. Configs can be found [here](cfgs/finetune/cls). 
- -To fine-tune MultiMAE on ImageNet-1K classification using default settings, run: -```bash -OMP_NUM_THREADS=1 torchrun --nproc_per_node=8 run_finetuning_cls.py \ ---config cfgs/finetune/cls/ft_in1k_100e_multimae-b.yaml \ ---finetune /path/to/multimae_weights \ ---data_path /path/to/in1k/train/rgb \ ---eval_data_path /path/to/in1k/val/rgb -``` - -- For a list of possible arguments, see [`run_finetuning_cls.py`](run_finetuning_cls.py). - -## Semantic segmentation - -We use 4 A100 GPUs for semantic segmentation fine-tuning. Configs can be found [here](cfgs/finetune/semseg). - -### ADE20K -To fine-tune MultiMAE on ADE20K semantic segmentation with default settings and **RGB** as the input modality, run: -```bash -OMP_NUM_THREADS=1 torchrun --nproc_per_node=4 run_finetuning_semseg.py \ ---config cfgs/finetune/semseg/ade/ft_ade_64e_multimae-b_rgb.yaml \ ---finetune /path/to/multimae_weights \ ---data_path /path/to/ade20k/train \ ---eval_data_path /path/to/ade20k/val -``` - -- For a list of possible arguments, see [`run_finetuning_semseg.py`](run_finetuning_semseg.py). - - -### Hypersim -To fine-tune MultiMAE on Hypersim semantic segmentation with default settings and **RGB** as the input modality, run: -```bash -OMP_NUM_THREADS=1 torchrun --nproc_per_node=4 run_finetuning_semseg.py \ ---config cfgs/finetune/semseg/hypersim/ft_hypersim_25e_multimae-b_rgb.yaml \ ---finetune /path/to/multimae_weights \ ---data_path /path/to/hypersim/train \ ---eval_data_path /path/to/hypersim/val -``` - -- To fine-tune using **depth-only** and **RGB + depth** as the input modalities, simply swap the config file to the appropriate one. -- For a list of possible arguments, see [`run_finetuning_semseg.py`](run_finetuning_semseg.py). - - - -### NYUv2 -To fine-tune MultiMAE on NYUv2 semantic segmentation with default settings and **RGB** as the input modality, run: -```bash -OMP_NUM_THREADS=1 torchrun --nproc_per_node=4 run_finetuning_semseg.py \ ---config cfgs/finetune/semseg/nyu/ft_nyu_200e_multimae-b_rgb.yaml \ ---finetune /path/to/multimae_weights \ ---data_path /path/to/nyu/train \ ---eval_data_path /path/to/nyu/test_or_val -``` - -- To fine-tune using **depth-only** and **RGB + depth** as the input modalities, simply swap the config file to the appropriate one. -- For a list of possible arguments, see [`run_finetuning_semseg.py`](run_finetuning_semseg.py). - - -## Depth estimation - -We use 2 A100 GPUs for depth estimation fine-tuning. Configs can be found [here](cfgs/finetune/depth). - - -To fine-tune MultiMAE on NYUv2 depth estimation with default settings, run: -```bash -OMP_NUM_THREADS=1 torchrun --nproc_per_node=2 run_finetuning_depth.py \ ---config cfgs/finetune/depth/ft_nyu_2000e_multimae-b.yaml \ ---finetune /path/to/multimae_weights \ ---data_path /path/to/nyu/train \ ---eval_data_path /path/to/nyu/test_or_val -``` -- For a list of possible arguments, see [`run_finetuning_depth.py`](run_finetuning_depth.py). - -## Taskonomy tasks - -We use 1 A100 GPU to fine-tune on Taskonomy tasks. Configs can be found [here](cfgs/finetune/taskonomy). - -The tasks we support are: Principal curvature, z-buffer depth, texture edges, occlusion edges, 2D keypoints, -3D keypoints, surface normals, and reshading. 
- - -For example, to fine-tune MultiMAE on Taskonomy reshading with default settings, run: -```bash -OMP_NUM_THREADS=1 torchrun --nproc_per_node=1 run_finetuning_taskonomy.py \ ---config cfgs/finetune/taskonomy/rgb2reshading-1k/ft_rgb2reshading_multimae-b.yaml \ ---finetune /path/to/multimae_weights \ ---data_path /path/to/taskonomy_tiny -``` - -- To fine-tune on a different task, simply swap the config file to the appropriate one. -- For a list of possible arguments, see [`run_finetuning_taskonomy.py`](run_finetuning_taskonomy.py). diff --git a/spaces/Egrt/MaskGAN/utils/__init__.py b/spaces/Egrt/MaskGAN/utils/__init__.py deleted file mode 100644 index 90f60fdd89ad8575faafe45188bd1d968852fc67..0000000000000000000000000000000000000000 --- a/spaces/Egrt/MaskGAN/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .utils import * \ No newline at end of file diff --git a/spaces/FantasticGNU/AnomalyGPT/header.py b/spaces/FantasticGNU/AnomalyGPT/header.py deleted file mode 100644 index 2e34537c2e988b2cc62e5ebc78197b76130dc51e..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/header.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -import datetime -import types -import deepspeed -from transformers.deepspeed import HfDeepSpeedConfig -import transformers -import numpy as np -from collections import OrderedDict -from torch.utils.data import Dataset, DataLoader -from torch.nn.utils import clip_grad_norm_ -from torch.cuda.amp import autocast, GradScaler -from torch.nn import DataParallel -from torch.optim import lr_scheduler -import torch.optim as optim -import torch.nn as nn -import torch.nn.functional as F -from tqdm import tqdm -import os -import re -import math -import random -import json -import time -import logging -from copy import deepcopy -import ipdb -import argparse -from model.ImageBind import data -from transformers import LlamaTokenizer, LlamaForCausalLM, LlamaConfig -from torch.nn.utils.rnn import pad_sequence -from peft import LoraConfig, TaskType, get_peft_model - -logging.getLogger("transformers").setLevel(logging.WARNING) -logging.getLogger("transformers.tokenization_utils").setLevel(logging.ERROR) -os.environ['TOKENIZERS_PARALLELISM'] = 'false' diff --git a/spaces/Fengbinbin/gpt-academic/request_llm/bridge_tgui.py b/spaces/Fengbinbin/gpt-academic/request_llm/bridge_tgui.py deleted file mode 100644 index fcf852f0474892bd179843ece3f4a83110bd7756..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/request_llm/bridge_tgui.py +++ /dev/null @@ -1,171 +0,0 @@ -''' -Contributed by SagsMug. 
Modified by binary-husky -https://github.com/oobabooga/text-generation-webui/pull/175 -''' - -import asyncio -import json -import random -import string -import websockets -import logging -import time -import threading -import importlib -from toolbox import get_conf, update_ui - - -def random_hash(): - letters = string.ascii_lowercase + string.digits - return ''.join(random.choice(letters) for i in range(9)) - -async def run(context, max_token, temperature, top_p, addr, port): - params = { - 'max_new_tokens': max_token, - 'do_sample': True, - 'temperature': temperature, - 'top_p': top_p, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'encoder_repetition_penalty': 1.0, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': True, - 'seed': -1, - } - session = random_hash() - - async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket: - while content := json.loads(await websocket.recv()): - #Python3.10 syntax, replace with if elif on older - if content["msg"] == "send_hash": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12 - })) - elif content["msg"] == "estimation": - pass - elif content["msg"] == "send_data": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12, - "data": [ - context, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['encoder_repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - params['seed'], - ] - })) - elif content["msg"] == "process_starts": - pass - elif content["msg"] in ["process_generating", "process_completed"]: - yield content["output"]["data"][0] - # You can search for your desired end indicator and - # stop generation by closing the websocket here - if (content["msg"] == "process_completed"): - break - - - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - Send the request to chatGPT and fetch the output as a stream. - Used for the basic chat functionality. - inputs is the input of this query - top_p, temperature are chatGPT's internal tuning parameters - history is the list of previous messages (note that for both inputs and history, overly long content will trigger a token-overflow error) - chatbot is the conversation list shown in the WebUI; modify it and then yield it out to update the chat interface directly - additional_fn indicates which button was clicked; see functional.py for the buttons - """ - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # hot-reload the prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # fetch the pre-processing function (if any) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = "What I would like to say is the following: " + inputs - history.extend([inputs, ""]) - chatbot.append([inputs, ""]) - yield from update_ui(chatbot=chatbot, history=history, msg="Waiting for response") # refresh the UI - - prompt = raw_input - tgui_say = "" - - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL format is incorrect!" 
+ llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - mutable = ["", time.time()] - def run_coorotine(mutable): - async def get_result(mutable): - # "tgui:galactica-1.3b@localhost:7860" - - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(mutable[0]):]) - mutable[0] = response - if (time.time() - mutable[1]) > 3: - print('exit when no listener') - break - asyncio.run(get_result(mutable)) - - thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True) - thread_listen.start() - - while thread_listen.is_alive(): - time.sleep(1) - mutable[1] = time.time() - # Print intermediate steps - if tgui_say != mutable[0]: - tgui_say = mutable[0] - history[-1] = tgui_say - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history) # refresh the UI - - - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - raw_input = "What I would like to say is the following: " + inputs - prompt = raw_input - tgui_say = "" - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL format is incorrect!" + llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - def run_coorotine(observe_window): - async def get_result(observe_window): - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(observe_window[0]):]) - observe_window[0] = response - if (time.time() - observe_window[1]) > 5: - print('exit when no listener') - break - asyncio.run(get_result(observe_window)) - thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,)) - thread_listen.start() - return observe_window[0] diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Fakeopen.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Fakeopen.py deleted file mode 100644 index 5a82bf2cc0736384563332a279f5fbcbb120f676..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/Fakeopen.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -import json -import requests -from typing import Dict, get_type_hints - -url = 'https://ai.fakeopen.com/v1/' -model = [ - 'gpt-3.5-turbo', - 'gpt-3.5-turbo-0613', - 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', -] - -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - headers = { - 'Content-Type': 'application/json', - 'accept': 'text/event-stream', - 'Cache-Control': 'no-cache', - 'Proxy-Connection': 'keep-alive', - 'Authorization': f"Bearer {os.environ.get('FAKE_OPEN_KEY', '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')}", - } - - json_data = { - 'messages': messages, - 'temperature': 1.0, - 'model': model, - 'stream': stream, - } - - response = requests.post( - 'https://ai.fakeopen.com/v1/chat/completions', headers=headers, json=json_data, stream=True - ) - - for token in response.iter_lines(): - decoded = token.decode('utf-8') - if decoded == '[DONE]': - break - if decoded.startswith('data: '): - data_str = decoded.replace('data: ', '') - if data_str != '[DONE]': - data = json.loads(data_str) - if 'choices' in data and 'delta' in data['choices'][0] and 'content' in data['choices'][0]['delta']: - yield 
data['choices'][0]['delta']['content'] - - - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/FluxWaveCorp/Ghostwriter-Bloom/generators/title_to_abstract.py b/spaces/FluxWaveCorp/Ghostwriter-Bloom/generators/title_to_abstract.py deleted file mode 100644 index a5ff1dda8edc9a75e7befa4d8d7a16efe0722e67..0000000000000000000000000000000000000000 --- a/spaces/FluxWaveCorp/Ghostwriter-Bloom/generators/title_to_abstract.py +++ /dev/null @@ -1,5 +0,0 @@ - -from .model import model - -def title_to_abstract_generator(template): - return model('title', template) diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/repitch.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/repitch.py deleted file mode 100644 index 8846ab2d951a024c95067f66a113968500442828..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/demucs/repitch.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import io -import random -import subprocess as sp -import tempfile - -import numpy as np -import torch -from scipy.io import wavfile - - -def i16_pcm(wav): - if wav.dtype == np.int16: - return wav - return (wav * 2**15).clamp_(-2**15, 2**15 - 1).short() - - -def f32_pcm(wav): - if wav.dtype == np.float: - return wav - return wav.float() / 2**15 - - -class RepitchedWrapper: - """ - Wrap a dataset to apply online change of pitch / tempo. - """ - def __init__(self, dataset, proba=0.2, max_pitch=2, max_tempo=12, tempo_std=5, vocals=[3]): - self.dataset = dataset - self.proba = proba - self.max_pitch = max_pitch - self.max_tempo = max_tempo - self.tempo_std = tempo_std - self.vocals = vocals - - def __len__(self): - return len(self.dataset) - - def __getitem__(self, index): - streams = self.dataset[index] - in_length = streams.shape[-1] - out_length = int((1 - 0.01 * self.max_tempo) * in_length) - - if random.random() < self.proba: - delta_pitch = random.randint(-self.max_pitch, self.max_pitch) - delta_tempo = random.gauss(0, self.tempo_std) - delta_tempo = min(max(-self.max_tempo, delta_tempo), self.max_tempo) - outs = [] - for idx, stream in enumerate(streams): - stream = repitch( - stream, - delta_pitch, - delta_tempo, - voice=idx in self.vocals) - outs.append(stream[:, :out_length]) - streams = torch.stack(outs) - else: - streams = streams[..., :out_length] - return streams - - -def repitch(wav, pitch, tempo, voice=False, quick=False, samplerate=44100): - """ - tempo is a relative delta in percentage, so tempo=10 means tempo at 110%! - pitch is in semi tones. 
- Requires `soundstretch` to be installed, see - https://www.surina.net/soundtouch/soundstretch.html - """ - outfile = tempfile.NamedTemporaryFile(suffix=".wav") - in_ = io.BytesIO() - wavfile.write(in_, samplerate, i16_pcm(wav).t().numpy()) - command = [ - "soundstretch", - "stdin", - outfile.name, - f"-pitch={pitch}", - f"-tempo={tempo:.6f}", - ] - if quick: - command += ["-quick"] - if voice: - command += ["-speech"] - try: - sp.run(command, capture_output=True, input=in_.getvalue(), check=True) - except sp.CalledProcessError as error: - raise RuntimeError(f"Could not change bpm because {error.stderr.decode('utf-8')}") - sr, wav = wavfile.read(outfile.name) - wav = wav.copy() - wav = f32_pcm(torch.from_numpy(wav).t()) - assert sr == samplerate - return wav diff --git a/spaces/GenXDad/logo-wizard-logo-diffusion-checkpoint/README.md b/spaces/GenXDad/logo-wizard-logo-diffusion-checkpoint/README.md deleted file mode 100644 index b08db6b6d3aacf01e2070195d8d0357ce9cc40b3..0000000000000000000000000000000000000000 --- a/spaces/GenXDad/logo-wizard-logo-diffusion-checkpoint/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Logo Wizard Logo Diffusion Checkpoint -emoji: 🐢 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py deleted file mode 100644 index 012ad0a7d6119554ec00400ad18a09249a72eca4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = './fcn_hr18_480x480_80k_pascal_context_59.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w48', - backbone=dict( - extra=dict( - stage2=dict(num_channels=(48, 96)), - stage3=dict(num_channels=(48, 96, 192)), - stage4=dict(num_channels=(48, 96, 192, 384)))), - decode_head=dict( - in_channels=[48, 96, 192, 384], channels=sum([48, 96, 192, 384]))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_small/test_config_h32.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_small/test_config_h32.py deleted file mode 100644 index a31e3874f76f9f7b089ac8834d85df2441af9b0e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/exp/upernet_global_small/test_config_h32.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../configs/_base_/models/upernet_uniformer.py', - '../../configs/_base_/datasets/ade20k.py', - '../../configs/_base_/default_runtime.py', - '../../configs/_base_/schedules/schedule_160k.py' -] -model = dict( - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - drop_path_rate=0.25, - windows=False, - hybrid=True, - window_size=32 - ), - decode_head=dict( - in_channels=[64, 128, 320, 512], - num_classes=150 - ), - auxiliary_head=dict( - in_channels=320, - num_classes=150 - )) - -# AdamW optimizer, no weight decay for position embedding & layer norm in backbone -optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': 
dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) - -lr_config = dict(_delete_=True, policy='poly', - warmup='linear', - warmup_iters=1500, - warmup_ratio=1e-6, - power=1.0, min_lr=0.0, by_epoch=False) - -data=dict(samples_per_gpu=2) \ No newline at end of file diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/conditioners.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/conditioners.py deleted file mode 100644 index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/conditioners.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import defaultdict -from copy import deepcopy -from dataclasses import dataclass, field -from itertools import chain -import logging -import math -import random -import re -import typing as tp -import warnings - -from einops import rearrange -from num2words import num2words -import spacy -from transformers import T5EncoderModel, T5Tokenizer # type: ignore -import torchaudio -import torch -from torch import nn -from torch import Tensor -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence - -from .streaming import StreamingModule -from .transformer import create_sin_embedding -from ..data.audio_dataset import SegmentInfo -from ..utils.autocast import TorchAutocast -from ..utils.utils import hash_trick, length_to_mask, collate - - -logger = logging.getLogger(__name__) -TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist) -ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask - - -class WavCondition(tp.NamedTuple): - wav: Tensor - length: Tensor - path: tp.List[tp.Optional[str]] = [] - - -def nullify_condition(condition: ConditionType, dim: int = 1): - """This function transforms an input condition to a null condition. - The way it is done by converting it to a single zero vector similarly - to how it is done inside WhiteSpaceTokenizer and NoopTokenizer. - - Args: - condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor]) - dim (int): the dimension that will be truncated (should be the time dimension) - WARNING!: dim should not be the batch dimension! - Returns: - ConditionType: a tuple of null condition and mask - """ - assert dim != 0, "dim cannot be the batch dimension!" - assert type(condition) == tuple and \ - type(condition[0]) == Tensor and \ - type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!" - cond, mask = condition - B = cond.shape[0] - last_dim = cond.dim() - 1 - out = cond.transpose(dim, last_dim) - out = 0. * out[..., :1] - out = out.transpose(dim, last_dim) - mask = torch.zeros((B, 1), device=out.device).int() - assert cond.dim() == out.dim() - return out, mask - - -def nullify_wav(wav: Tensor) -> WavCondition: - """Create a nullified WavCondition from a wav tensor with appropriate shape. - - Args: - wav (Tensor): tensor of shape [B, T] - Returns: - WavCondition: wav condition with nullified wav. 
- """ - null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1) - return WavCondition( - wav=null_wav, - length=torch.tensor([0] * wav.shape[0], device=wav.device), - path=['null_wav'] * wav.shape[0] - ) - - -@dataclass -class ConditioningAttributes: - text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict) - wav: tp.Dict[str, WavCondition] = field(default_factory=dict) - - def __getitem__(self, item): - return getattr(self, item) - - @property - def text_attributes(self): - return self.text.keys() - - @property - def wav_attributes(self): - return self.wav.keys() - - @property - def attributes(self): - return {"text": self.text_attributes, "wav": self.wav_attributes} - - def to_flat_dict(self): - return { - **{f"text.{k}": v for k, v in self.text.items()}, - **{f"wav.{k}": v for k, v in self.wav.items()}, - } - - @classmethod - def from_flat_dict(cls, x): - out = cls() - for k, v in x.items(): - kind, att = k.split(".") - out[kind][att] = v - return out - - -class SegmentWithAttributes(SegmentInfo): - """Base class for all dataclasses that are used for conditioning. - All child classes should implement `to_condition_attributes` that converts - the existing attributes to a dataclass of type ConditioningAttributes. - """ - def to_condition_attributes(self) -> ConditioningAttributes: - raise NotImplementedError() - - -class Tokenizer: - """Base class for all tokenizers - (in case we want to introduce more advances tokenizers in the future). - """ - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - raise NotImplementedError() - - -class WhiteSpaceTokenizer(Tokenizer): - """This tokenizer should be used for natural language descriptions. - For example: - ["he didn't, know he's going home.", 'shorter sentence'] => - [[78, 62, 31, 4, 78, 25, 19, 34], - [59, 77, 0, 0, 0, 0, 0, 0]] - """ - PUNCTUATIONS = "?:!.,;" - - def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm", - lemma: bool = True, stopwords: bool = True) -> None: - self.n_bins = n_bins - self.pad_idx = pad_idx - self.lemma = lemma - self.stopwords = stopwords - try: - self.nlp = spacy.load(language) - except IOError: - spacy.cli.download(language) # type: ignore - self.nlp = spacy.load(language) - - @tp.no_type_check - def __call__( - self, - texts: tp.List[tp.Optional[str]], - return_text: bool = False - ) -> tp.Tuple[Tensor, Tensor]: - """Take a list of strings and convert them to a tensor of indices. - - Args: - texts (tp.List[str]): List of strings. - return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False. - Returns: - tp.Tuple[Tensor, Tensor]: - - Indices of words in the LUT. 
- - And a mask indicating where the padding tokens are - """ - output, lengths = [], [] - texts = deepcopy(texts) - for i, text in enumerate(texts): - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(Tensor([self.pad_idx])) - lengths.append(0) - continue - - # convert numbers to words - text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore - # normalize text - text = self.nlp(text) # type: ignore - # remove stopwords - if self.stopwords: - text = [w for w in text if not w.is_stop] # type: ignore - # remove punctuations - text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore - # lemmatize if needed - text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore - - texts[i] = " ".join(text) - lengths.append(len(text)) - # convert to tensor - tokens = Tensor([hash_trick(w, self.n_bins) for w in text]) - output.append(tokens) - - mask = length_to_mask(torch.IntTensor(lengths)).int() - padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t() - if return_text: - return padded_output, mask, texts # type: ignore - return padded_output, mask - - -class NoopTokenizer(Tokenizer): - """This tokenizer should be used for global conditioners such as: artist, genre, key, etc. - The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split - strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will - split it to ["Jeff", "Buckley"] and return an index per word. - - For example: - ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101] - ["Metal", "Rock", "Classical"] => [0, 223, 51] - """ - def __init__(self, n_bins: int, pad_idx: int = 0): - self.n_bins = n_bins - self.pad_idx = pad_idx - - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - output, lengths = [], [] - for text in texts: - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(self.pad_idx) - lengths.append(0) - else: - output.append(hash_trick(text, self.n_bins)) - lengths.append(1) - - tokens = torch.LongTensor(output).unsqueeze(1) - mask = length_to_mask(torch.IntTensor(lengths)).int() - return tokens, mask - - -class BaseConditioner(nn.Module): - """Base model for all conditioner modules. We allow the output dim to be different - than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large; - 2) make all condition dims consistent. - - Args: - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - """ - def __init__(self, dim, output_dim): - super().__init__() - self.dim = dim - self.output_dim = output_dim - self.output_proj = nn.Linear(dim, output_dim) - - def tokenize(self, *args, **kwargs) -> tp.Any: - """Should be any part of the processing that will lead to a synchronization - point, e.g. BPE tokenization with transfer to the GPU. - - The returned value will be saved and return later when calling forward(). - """ - raise NotImplementedError() - - def forward(self, inputs: tp.Any) -> ConditionType: - """Gets input that should be used as conditioning (e.g, genre, description or a waveform). - Outputs a ConditionType, after the input data was embedded as a dense vector. - - Returns: - ConditionType: - - A tensor of size [B, T, D] where B is the batch size, T is the length of the - output embedding and D is the dimension of the embedding. 
- - And a mask indicating where the padding tokens. - """ - raise NotImplementedError() - - -class TextConditioner(BaseConditioner): - ... - - -class LUTConditioner(TextConditioner): - """Lookup table TextConditioner. - - Args: - n_bins (int): Number of bins. - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - tokenizer (str): Name of the tokenizer. - pad_idx (int, optional): Index for padding token. Defaults to 0. - """ - def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0): - super().__init__(dim, output_dim) - self.embed = nn.Embedding(n_bins, dim) - self.tokenizer: Tokenizer - if tokenizer == "whitespace": - self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx) - elif tokenizer == "noop": - self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx) - else: - raise ValueError(f"unrecognized tokenizer `{tokenizer}`.") - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - device = self.embed.weight.device - tokens, mask = self.tokenizer(x) - tokens, mask = tokens.to(device), mask.to(device) - return tokens, mask - - def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType: - tokens, mask = inputs - embeds = self.embed(tokens) - embeds = self.output_proj(embeds) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class T5Conditioner(TextConditioner): - """T5-based TextConditioner. - - Args: - name (str): Name of the T5 model. - output_dim (int): Output dim of the conditioner. - finetune (bool): Whether to fine-tune T5 at train time. - device (str): Device for T5 Conditioner. - autocast_dtype (tp.Optional[str], optional): Autocast dtype. - word_dropout (float, optional): Word dropout probability. - normalize_text (bool, optional): Whether to apply text normalization. - """ - MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", - "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", - "google/flan-t5-xl", "google/flan-t5-xxl"] - MODELS_DIMS = { - "t5-small": 512, - "t5-base": 768, - "t5-large": 1024, - "t5-3b": 1024, - "t5-11b": 1024, - "google/flan-t5-small": 512, - "google/flan-t5-base": 768, - "google/flan-t5-large": 1024, - "google/flan-t5-3b": 1024, - "google/flan-t5-11b": 1024, - } - - def __init__(self, name: str, output_dim: int, finetune: bool, device: str, - autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0., - normalize_text: bool = False): - assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})" - super().__init__(self.MODELS_DIMS[name], output_dim) - self.device = device - self.name = name - self.finetune = finetune - self.word_dropout = word_dropout - - if autocast_dtype is None or self.device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - if self.device != 'cpu': - logger.warning("T5 has no autocast, this might lead to NaN") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # Let's disable logging temporarily because T5 will vomit some errors otherwise. 
- # thanks https://gist.github.com/simon-weber/7853144 - previous_level = logging.root.manager.disable - logging.disable(logging.ERROR) - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - try: - self.t5_tokenizer = T5Tokenizer.from_pretrained(name) - t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune) - finally: - logging.disable(previous_level) - if finetune: - self.t5 = t5 - else: - # this makes sure that the t5 models is not part - # of the saved checkpoint - self.__dict__["t5"] = t5.to(device) - - self.normalize_text = normalize_text - if normalize_text: - self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True) - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]: - # if current sample doesn't have a certain attribute, replace with empty string - entries: tp.List[str] = [xi if xi is not None else "" for xi in x] - if self.normalize_text: - _, _, entries = self.text_normalizer(entries, return_text=True) - if self.word_dropout > 0. and self.training: - new_entries = [] - for entry in entries: - words = [word for word in entry.split(" ") if random.random() >= self.word_dropout] - new_entries.append(" ".join(words)) - entries = new_entries - - empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""]) - - inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device) - mask = inputs["attention_mask"] - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - return inputs - - def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType: - mask = inputs["attention_mask"] - with torch.set_grad_enabled(self.finetune), self.autocast: - embeds = self.t5(**inputs).last_hidden_state - embeds = self.output_proj(embeds.to(self.output_proj.weight)) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class WaveformConditioner(BaseConditioner): - """Base class for all conditioners that take a waveform as input. - Classes that inherit must implement `_get_wav_embedding` that outputs - a continuous tensor, and `_downsampling_factor` that returns the down-sampling - factor of the embedding model. - - Args: - dim (int): The internal representation dimension. - output_dim (int): Output dimension. - device (tp.Union[torch.device, str]): Device. - """ - def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]): - super().__init__(dim, output_dim) - self.device = device - - def tokenize(self, wav_length: WavCondition) -> WavCondition: - wav, length, path = wav_length - assert length is not None - return WavCondition(wav.to(self.device), length.to(self.device), path) - - def _get_wav_embedding(self, wav: Tensor) -> Tensor: - """Gets as input a wav and returns a dense vector of conditions.""" - raise NotImplementedError() - - def _downsampling_factor(self): - """Returns the downsampling factor of the embedding model.""" - raise NotImplementedError() - - def forward(self, inputs: WavCondition) -> ConditionType: - """ - Args: - input (WavCondition): Tuple of (waveform, lengths). - Returns: - ConditionType: Dense vector representing the conditioning along with its' mask. 
- """ - wav, lengths, path = inputs - with torch.no_grad(): - embeds = self._get_wav_embedding(wav) - embeds = embeds.to(self.output_proj.weight) - embeds = self.output_proj(embeds) - - if lengths is not None: - lengths = lengths / self._downsampling_factor() - mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore - else: - mask = torch.ones_like(embeds) - embeds = (embeds * mask.unsqueeze(2).to(self.device)) - - return embeds, mask - - -class ChromaStemConditioner(WaveformConditioner): - """Chroma conditioner that uses DEMUCS to first filter out drums and bass. The is followed by - the insight the drums and bass often dominate the chroma, leading to the chroma not containing the - information about melody. - - Args: - output_dim (int): Output dimension for the conditioner. - sample_rate (int): Sample rate for the chroma extractor. - n_chroma (int): Number of chroma for the chroma extractor. - radix2_exp (int): Radix2 exponent for the chroma extractor. - duration (float): Duration used during training. This is later used for correct padding - in case we are using chroma as prefix. - match_len_on_eval (bool, optional): If True then all chromas are padded to the training - duration. Defaults to False. - eval_wavs (str, optional): Path to a json egg with waveform, this waveforms are used as - conditions during eval (for cases where we don't want to leak test conditions like MusicCaps). - Defaults to None. - n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for the conditioner. - **kwargs: Additional parameters for the chroma extractor. - """ - def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int, - duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None, - n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs): - from demucs import pretrained - super().__init__(dim=n_chroma, output_dim=output_dim, device=device) - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.sample_rate = sample_rate - self.match_len_on_eval = match_len_on_eval - self.duration = duration - self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device) - self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3} - self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device) - self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp, - device=device, **kwargs) - self.chroma_len = self._get_chroma_len() - - def _downsampling_factor(self): - return self.chroma.winhop - - def _get_chroma_len(self): - """Get length of chroma during training""" - dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device) - dummy_chr = self.chroma(dummy_wav) - return dummy_chr.shape[1] - - @torch.no_grad() - def _get_filtered_wav(self, wav): - from demucs.apply import apply_model - from demucs.audio import convert_audio - with self.autocast: - wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels) - stems = apply_model(self.demucs, wav, device=self.device) - stems = stems[:, self.stem_idx] # extract stem - stems = stems.sum(1) # merge extracted stems - stems = stems.mean(1, keepdim=True) # mono - stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1) - return stems - - @torch.no_grad() - def _get_wav_embedding(self, 
wav): - # avoid 0-size tensors when we are working with null conds - if wav.shape[-1] == 1: - return self.chroma(wav) - stems = self._get_filtered_wav(wav) - chroma = self.chroma(stems) - - if self.match_len_on_eval: - b, t, c = chroma.shape - if t > self.chroma_len: - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})') - elif t < self.chroma_len: - # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t)) - n_repeat = int(math.ceil(self.chroma_len / t)) - chroma = chroma.repeat(1, n_repeat, 1) - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was zero-padded! ({t} -> {chroma.shape[1]})') - return chroma - - -class ChromaExtractor(nn.Module): - """Chroma extraction class, handles chroma extraction and quantization. - - Args: - sample_rate (int): Sample rate. - n_chroma (int): Number of chroma to consider. - radix2_exp (int): Radix2 exponent. - nfft (tp.Optional[int], optional): Number of FFT. - winlen (tp.Optional[int], optional): Window length. - winhop (tp.Optional[int], optional): Window hop size. - argmax (bool, optional): Whether to use argmax. Defaults to False. - norm (float, optional): Norm for chroma normalization. Defaults to inf. - device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu. - """ - def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12, - nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None, - argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"): - super().__init__() - from librosa import filters - self.device = device - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.winlen = winlen or 2 ** radix2_exp - self.nfft = nfft or self.winlen - self.winhop = winhop or (self.winlen // 4) - self.sr = sample_rate - self.n_chroma = n_chroma - self.norm = norm - self.argmax = argmax - self.window = torch.hann_window(self.winlen).to(device) - self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0, - n_chroma=self.n_chroma)).to(device) - self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen, - hop_length=self.winhop, power=2, center=True, - pad=0, normalized=True).to(device) - - def forward(self, wav): - with self.autocast: - T = wav.shape[-1] - # in case we are getting a wav that was dropped out (nullified) - # make sure wav length is no less that nfft - if T < self.nfft: - pad = self.nfft - T - r = 0 if pad % 2 == 0 else 1 - wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0) - assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}' - spec = self.spec(wav).squeeze(1) - raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec) - norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6) - norm_chroma = rearrange(norm_chroma, "b d t -> b t d") - - if self.argmax: - idx = norm_chroma.argmax(-1, keepdims=True) - norm_chroma[:] = 0 - norm_chroma.scatter_(dim=-1, index=idx, value=1) - - return norm_chroma - - -def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str): - """Utility function for nullifying an attribute inside an ConditioningAttributes object. - If the condition is of type "wav", then nullify it using "nullify_condition". - If the condition is of any other type, set its' value to None. - Works in-place. 
- """ - if condition_type not in ["text", "wav"]: - raise ValueError( - "dropout_condition got an unexpected condition type!" - f" expected 'wav' or 'text' but got '{condition_type}'" - ) - - if condition not in getattr(sample, condition_type): - raise ValueError( - "dropout_condition received an unexpected condition!" - f" expected wav={sample.wav.keys()} and text={sample.text.keys()}" - f"but got '{condition}' of type '{condition_type}'!" - ) - - if condition_type == "wav": - wav, length, path = sample.wav[condition] - sample.wav[condition] = nullify_wav(wav) - else: - sample.text[condition] = None - - return sample - - -class DropoutModule(nn.Module): - """Base class for all dropout modules.""" - def __init__(self, seed: int = 1234): - super().__init__() - self.rng = torch.Generator() - self.rng.manual_seed(seed) - - -class AttributeDropout(DropoutModule): - """Applies dropout with a given probability per attribute. This is different from the behavior of - ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example, - "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout - where if "artist" is dropped "genre" must also be dropped. - - Args: - p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example: - ... - "genre": 0.1, - "artist": 0.5, - "wav": 0.25, - ... - active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False. - seed (int, optional): Random seed. - """ - def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234): - super().__init__(seed=seed) - self.active_on_eval = active_on_eval - # construct dict that return the values from p otherwise 0 - self.p = {} - for condition_type, probs in p.items(): - self.p[condition_type] = defaultdict(lambda: 0, probs) - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None. - """ - if not self.training and not self.active_on_eval: - return samples - - samples = deepcopy(samples) - - for condition_type, ps in self.p.items(): # for condition types [text, wav] - for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre]) - if torch.rand(1, generator=self.rng).item() < p: - for sample in samples: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"AttributeDropout({dict(self.p)})" - - -class ClassifierFreeGuidanceDropout(DropoutModule): - """Applies Classifier Free Guidance dropout, meaning all attributes - are dropped with the same probability. - - Args: - p (float): Probability to apply condition dropout during training. - seed (int): Random seed. - """ - def __init__(self, p: float, seed: int = 1234): - super().__init__(seed=seed) - self.p = p - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None. 
- """ - if not self.training: - return samples - - # decide on which attributes to drop in a batched fashion - drop = torch.rand(1, generator=self.rng).item() < self.p - if not drop: - return samples - - # nullify conditions of all attributes - samples = deepcopy(samples) - - for condition_type in ["wav", "text"]: - for sample in samples: - for condition in sample.attributes[condition_type]: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"ClassifierFreeGuidanceDropout(p={self.p})" - - -class ConditioningProvider(nn.Module): - """Main class to provide conditions given all the supported conditioners. - - Args: - conditioners (dict): Dictionary of conditioners. - merge_text_conditions_p (float, optional): Probability to merge all text sources - into a single text condition. Defaults to 0. - drop_desc_p (float, optional): Probability to drop the original description - when merging all text sources into a single text condition. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types. - """ - def __init__( - self, - conditioners: tp.Dict[str, BaseConditioner], - merge_text_conditions_p: float = 0, - drop_desc_p: float = 0, - device: tp.Union[torch.device, str] = "cpu", - ): - super().__init__() - self.device = device - self.merge_text_conditions_p = merge_text_conditions_p - self.drop_desc_p = drop_desc_p - self.conditioners = nn.ModuleDict(conditioners) - - @property - def text_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)] - - @property - def wav_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)] - - @property - def has_wav_condition(self): - return len(self.wav_conditions) > 0 - - def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]: - """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly. - This should be called before starting any real GPU work to avoid synchronization points. - This will return a dict matching conditioner names to their arbitrary tokenized representations. - - Args: - inputs (list[ConditioningAttribres]): List of ConditioningAttributes objects containing - text and wav conditions. - """ - assert all([type(x) == ConditioningAttributes for x in inputs]), \ - "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \ - f" but types were {set([type(x) for x in inputs])}" - - output = {} - text = self._collate_text(inputs) - wavs = self._collate_wavs(inputs) - - assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \ - f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}" - - for attribute, batch in chain(text.items(), wavs.items()): - output[attribute] = self.conditioners[attribute].tokenize(batch) - return output - - def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]: - """Compute pairs of `(embedding, mask)` using the configured conditioners - and the tokenized representations. The output is for example: - - { - "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])), - "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])), - ... - } - - Args: - tokenized (dict): Dict of tokenized representations as returned by `tokenize()`. 
- """ - output = {} - for attribute, inputs in tokenized.items(): - condition, mask = self.conditioners[attribute](inputs) - output[attribute] = (condition, mask) - return output - - def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]: - """Given a list of ConditioningAttributes objects, compile a dictionary where the keys - are the attributes and the values are the aggregated input per attribute. - For example: - Input: - [ - ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...), - ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...), - ] - Output: - { - "genre": ["Rock", "Hip-hop"], - "description": ["A rock song with a guitar solo", "A hip-hop verse"] - } - """ - batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list) - - def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0): - def is_valid(k, v): - k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument'] - v_valid = v is not None and isinstance(v, (int, float, str, list)) - return k_valid and v_valid - - def process_value(v): - if isinstance(v, (int, float, str)): - return v - if isinstance(v, list): - return ", ".join(v) - else: - RuntimeError(f"unknown type for text value! ({type(v), v})") - - desc = cond.text['description'] - meta_data = "" - if random.uniform(0, 1) < merge_text_conditions_p: - meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)] - random.shuffle(meta_pairs) - meta_data = ". ".join(meta_pairs) - desc = desc if not random.uniform(0, 1) < drop_desc_p else None - - if desc is None: - desc = meta_data if len(meta_data) > 1 else None - else: - desc = desc.rstrip('.') + ". " + meta_data - cond.text['description'] = desc.strip() if desc else None - - if self.training and self.merge_text_conditions_p: - for sample in samples: - _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p) - - texts = [x.text for x in samples] - for text in texts: - for condition in self.text_conditions: - batch_per_attribute[condition].append(text[condition]) - - return batch_per_attribute - - def _collate_wavs(self, samples: tp.List[ConditioningAttributes]): - """Generate a dict where the keys are attributes by which we fetch similar wavs, - and the values are Tensors of wavs according to said attribtues. - - *Note*: by the time the samples reach this function, each sample should have some waveform - inside the "wav" attribute. It should be either: - 1. A real waveform - 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset) - 3. A null waveform due to it being dropped in a dropout module (nullified by dropout) - - Args: - samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples. - Returns: - dict: A dicionary mapping an attribute name to wavs. 
- """ - wavs = defaultdict(list) - lens = defaultdict(list) - paths = defaultdict(list) - out = {} - - for sample in samples: - for attribute in self.wav_conditions: - wav, length, path = sample.wav[attribute] - wavs[attribute].append(wav.flatten()) - lens[attribute].append(length) - paths[attribute].append(path) - - # stack all wavs to a single tensor - for attribute in self.wav_conditions: - stacked_wav, _ = collate(wavs[attribute], dim=0) - out[attribute] = WavCondition(stacked_wav.unsqueeze(1), - torch.cat(lens['self_wav']), paths[attribute]) # type: ignore - - return out - - -class ConditionFuser(StreamingModule): - """Condition fuser handles the logic to combine the different conditions - to the actual model input. - - Args: - fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse - each condition. For example: - { - "prepend": ["description"], - "sum": ["genre", "bpm"], - "cross": ["description"], - } - cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention. - cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used. - """ - FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"] - - def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False, - cross_attention_pos_emb_scale: float = 1.0): - super().__init__() - assert all( - [k in self.FUSING_METHODS for k in fuse2cond.keys()] - ), f"got invalid fuse method, allowed methods: {self.FUSING_MEHTODS}" - self.cross_attention_pos_emb = cross_attention_pos_emb - self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale - self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond - self.cond2fuse: tp.Dict[str, str] = {} - for fuse_method, conditions in fuse2cond.items(): - for condition in conditions: - self.cond2fuse[condition] = fuse_method - - def forward( - self, - input: Tensor, - conditions: tp.Dict[str, ConditionType] - ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]: - """Fuse the conditions to the provided model input. - - Args: - input (Tensor): Transformer input. - conditions (tp.Dict[str, ConditionType]): Dict of conditions. - Returns: - tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input - after the conditions have been fused. The second output tensor is the tensor - used for cross-attention or None if no cross attention inputs exist. 
- """ - B, T, _ = input.shape - - if 'offsets' in self._streaming_state: - first_step = False - offsets = self._streaming_state['offsets'] - else: - first_step = True - offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device) - - assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \ - f"given conditions contain unknown attributes for fuser, " \ - f"expected {self.cond2fuse.keys()}, got {conditions.keys()}" - cross_attention_output = None - for cond_type, (cond, cond_mask) in conditions.items(): - op = self.cond2fuse[cond_type] - if op == "sum": - input += cond - elif op == "input_interpolate": - cond = rearrange(cond, "b t d -> b d t") - cond = F.interpolate(cond, size=input.shape[1]) - input += rearrange(cond, "b d t -> b t d") - elif op == "prepend": - if first_step: - input = torch.cat([cond, input], dim=1) - elif op == "cross": - if cross_attention_output is not None: - cross_attention_output = torch.cat([cross_attention_output, cond], dim=1) - else: - cross_attention_output = cond - else: - raise ValueError(f"unknown op ({op})") - - if self.cross_attention_pos_emb and cross_attention_output is not None: - positions = torch.arange( - cross_attention_output.shape[1], - device=cross_attention_output.device - ).view(1, -1, 1) - pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1]) - cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return input, cross_attention_output diff --git a/spaces/GurudattaBS/GenDiseasePrediction/code/helper.py b/spaces/GurudattaBS/GenDiseasePrediction/code/helper.py deleted file mode 100644 index a187ec9b60a435be65018508e17ac45cf3f90709..0000000000000000000000000000000000000000 --- a/spaces/GurudattaBS/GenDiseasePrediction/code/helper.py +++ /dev/null @@ -1,33 +0,0 @@ -import pandas as pd -import numpy as np - -# def preprocess_kaggle(dataset_path): - -# # import the dataset -# dataset_df = pd.read_csv(dataset_path) - -# # Preprocess -# dataset_df = dataset_df.apply(lambda col: col.str.strip()) - -# test = pd.get_dummies(dataset_df.filter(regex='Symptom'), prefix='', prefix_sep='') -# test = test.groupby(test.columns, axis=1).agg(np.max) -# clean_df = pd.merge(test,dataset_df['Disease'], left_index=True, right_index=True) - -# return clean_df - -def prepare_symptoms_array(symptoms): - ''' - Convert a list of symptoms to a ndim(X) (in this case 131) that matches the - dataframe used to train the machine learning model - - Output: - - X (np.array) = X values ready as input to ML model to get prediction - ''' - symptoms_array = np.zeros((1,133)) - df = pd.read_csv('data/clean_dataset.tsv', sep='\t') - - for symptom in symptoms: - symptom_idx = df.columns.get_loc(symptom) - symptoms_array[0, symptom_idx] = 1 - - return symptoms_array \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/tokenization.py b/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/tokenization.py deleted file mode 100644 index bbc94e2417ff42ffcfb18284b8cb396415e630b1..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/zen1/tokenization.py +++ /dev/null @@ -1,438 +0,0 @@ -# coding=utf-8 -# This file is derived from the code at -# https://github.com/huggingface/transformers/blob/master/transformers/tokenization_bert.py -# -# Original copyright notice: -# -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace 
Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes.""" - -from __future__ import absolute_import, division, print_function, unicode_literals - -import collections -import logging -import os -import unicodedata -from io import open - -from transformers import cached_path - -logger = logging.getLogger(__name__) - -PRETRAINED_VOCAB_ARCHIVE_MAP = { - 'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt", - 'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt", - 'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt", - 'bert-large-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt", - 'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt", - 'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt", - 'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt", - 'bert-base-german-cased': "https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt", - 'bert-large-uncased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt", - 'bert-large-cased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-vocab.txt", - 'bert-large-uncased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt", - 'bert-large-cased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-vocab.txt", - 'bert-base-cased-finetuned-mrpc': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-vocab.txt", - 'IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese': 'https://huggingface.co/IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese/resolve/main/vocab.txt', -} -PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP = { - 'bert-base-uncased': 512, - 'bert-large-uncased': 512, - 'bert-base-cased': 512, - 'bert-large-cased': 512, - 'bert-base-multilingual-uncased': 512, - 'bert-base-multilingual-cased': 512, - 'bert-base-chinese': 512, - 'bert-base-german-cased': 512, - 'bert-large-uncased-whole-word-masking': 512, - 'bert-large-cased-whole-word-masking': 512, - 'bert-large-uncased-whole-word-masking-finetuned-squad': 512, - 'bert-large-cased-whole-word-masking-finetuned-squad': 512, - 'bert-base-cased-finetuned-mrpc': 512, -} -VOCAB_NAME = 'vocab.txt' - - -def load_vocab(vocab_file): - """Loads a vocabulary file into a dictionary.""" - vocab = collections.OrderedDict() - index = 0 - with open(vocab_file, "r", encoding="utf-8") as reader: - while True: - token = reader.readline() - if not token: 
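-                # readline() returns an empty string only at end of file, so stop reading here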
- break - token = token.strip() - vocab[token] = index - index += 1 - return vocab - - -def whitespace_tokenize(text): - """Runs basic whitespace cleaning and splitting on a piece of text.""" - text = text.strip() - if not text: - return [] - tokens = text.split() - return tokens - - -class BertTokenizer(object): - """Runs end-to-end tokenization: punctuation splitting + wordpiece""" - - def __init__(self, vocab_file, do_lower_case=True, max_len=None, do_basic_tokenize=True, - never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]")): - """Constructs a BertTokenizer. - - Args: - vocab_file: Path to a one-wordpiece-per-line vocabulary file - do_lower_case: Whether to lower case the input - Only has an effect when do_wordpiece_only=False - do_basic_tokenize: Whether to do basic tokenization before wordpiece. - max_len: An artificial maximum length to truncate tokenized sequences to; - Effective maximum length is always the minimum of this - value (if specified) and the underlying BERT model's - sequence length. - never_split: List of tokens which will never be split during tokenization. - Only has an effect when do_wordpiece_only=False - """ - if not os.path.isfile(vocab_file): - raise ValueError( - "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " - "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) - self.vocab = load_vocab(vocab_file) - self.ids_to_tokens = collections.OrderedDict( - [(ids, tok) for tok, ids in self.vocab.items()]) - self.do_basic_tokenize = do_basic_tokenize - if do_basic_tokenize: - self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case, - never_split=never_split) - self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab) - self.max_len = max_len if max_len is not None else int(1e12) - - def tokenize(self, text): - split_tokens = [] - if self.do_basic_tokenize: - for token in self.basic_tokenizer.tokenize(text): - for sub_token in self.wordpiece_tokenizer.tokenize(token): - split_tokens.append(sub_token) - else: - split_tokens = self.wordpiece_tokenizer.tokenize(text) - return split_tokens - - def convert_tokens_to_ids(self, tokens): - """Converts a sequence of tokens into ids using the vocab.""" - ids = [] - for token in tokens: - ids.append(self.vocab[token]) - if len(ids) > self.max_len: - logger.warning( - "Token indices sequence length is longer than the specified maximum " - " sequence length for this BERT model ({} > {}). Running this" - " sequence through BERT will result in indexing errors".format(len(ids), self.max_len) - ) - return ids - - def convert_ids_to_tokens(self, ids): - """Converts a sequence of ids in wordpiece tokens using the vocab.""" - tokens = [] - for i in ids: - tokens.append(self.ids_to_tokens[i]) - return tokens - - def save_vocabulary(self, vocab_path): - """Save the tokenizer vocabulary to a directory or file.""" - index = 0 - if os.path.isdir(vocab_path): - vocab_file = os.path.join(vocab_path, VOCAB_NAME) - with open(vocab_file, "w", encoding="utf-8") as writer: - for token, token_index in sorted(self.vocab.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning("Saving vocabulary to {}: vocabulary indices are not consecutive." 
- " Please check that the vocabulary is not corrupted!".format(vocab_file)) - index = token_index - writer.write(token + u'\n') - index += 1 - return vocab_file - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, cache_dir=None, *inputs, **kwargs): - """ - Instantiate a PreTrainedBertModel from a pre-trained model file. - Download and cache the pre-trained model file if needed. - """ - if pretrained_model_name_or_path in PRETRAINED_VOCAB_ARCHIVE_MAP: - vocab_file = PRETRAINED_VOCAB_ARCHIVE_MAP[pretrained_model_name_or_path] - if '-cased' in pretrained_model_name_or_path and kwargs.get('do_lower_case', True): - logger.warning("The pre-trained model you are loading is a cased model but you have not set " - "`do_lower_case` to False. We are setting `do_lower_case=False` for you but " - "you may want to check this behavior.") - kwargs['do_lower_case'] = False - elif '-cased' not in pretrained_model_name_or_path and not kwargs.get('do_lower_case', True): - logger.warning("The pre-trained model you are loading is an uncased model but you have set " - "`do_lower_case` to False. We are setting `do_lower_case=True` for you " - "but you may want to check this behavior.") - kwargs['do_lower_case'] = True - else: - vocab_file = pretrained_model_name_or_path - if os.path.isdir(vocab_file): - vocab_file = os.path.join(vocab_file, VOCAB_NAME) - # redirect to the cache, if necessary - try: - resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir) - except EnvironmentError: - if pretrained_model_name_or_path in PRETRAINED_VOCAB_ARCHIVE_MAP: - logger.error( - "Couldn't reach server at '{}' to download vocabulary.".format( - vocab_file)) - else: - logger.error( - "Model name '{}' was not found in model name list ({}). " - "We assumed '{}' was a path or url but couldn't find any file " - "associated to this path or url.".format( - pretrained_model_name_or_path, - ', '.join(PRETRAINED_VOCAB_ARCHIVE_MAP.keys()), - vocab_file)) - return None - if resolved_vocab_file == vocab_file: - logger.info("loading vocabulary file {}".format(vocab_file)) - else: - logger.info("loading vocabulary file {} from cache at {}".format( - vocab_file, resolved_vocab_file)) - if pretrained_model_name_or_path in PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP: - # if we're using a pretrained model, ensure the tokenizer wont index sequences longer - # than the number of positional embeddings - max_len = PRETRAINED_VOCAB_POSITIONAL_EMBEDDINGS_SIZE_MAP[pretrained_model_name_or_path] - kwargs['max_len'] = min(kwargs.get('max_len', int(1e12)), max_len) - # Instantiate tokenizer. - tokenizer = cls(resolved_vocab_file, *inputs, **kwargs) - return tokenizer - - -class BasicTokenizer(object): - """Runs basic tokenization (punctuation splitting, lower casing, etc.).""" - - def __init__(self, - do_lower_case=True, - never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]")): - """Constructs a BasicTokenizer. - - Args: - do_lower_case: Whether to lower case the input. - """ - self.do_lower_case = do_lower_case - self.never_split = never_split - - def tokenize(self, text): - """Tokenizes a piece of text.""" - text = self._clean_text(text) - # This was added on November 1st, 2018 for the multilingual and Chinese - # models. 
This is also applied to the English models now, but it doesn't - # matter since the English models were not trained on any Chinese data - # and generally don't have any Chinese data in them (there are Chinese - # characters in the vocabulary because Wikipedia does have some Chinese - # words in the English Wikipedia.). - text = self._tokenize_chinese_chars(text) - orig_tokens = whitespace_tokenize(text) - split_tokens = [] - for token in orig_tokens: - if self.do_lower_case and token not in self.never_split: - token = token.lower() - token = self._run_strip_accents(token) - split_tokens.extend(self._run_split_on_punc(token)) - - output_tokens = whitespace_tokenize(" ".join(split_tokens)) - return output_tokens - - def _run_strip_accents(self, text): - """Strips accents from a piece of text.""" - text = unicodedata.normalize("NFD", text) - output = [] - for char in text: - cat = unicodedata.category(char) - if cat == "Mn": - continue - output.append(char) - return "".join(output) - - def _run_split_on_punc(self, text): - """Splits punctuation on a piece of text.""" - if text in self.never_split: - return [text] - chars = list(text) - i = 0 - start_new_word = True - output = [] - while i < len(chars): - char = chars[i] - if _is_punctuation(char): - output.append([char]) - start_new_word = True - else: - if start_new_word: - output.append([]) - start_new_word = False - output[-1].append(char) - i += 1 - - return ["".join(x) for x in output] - - def _tokenize_chinese_chars(self, text): - """Adds whitespace around any CJK character.""" - output = [] - for char in text: - cp = ord(char) - if self._is_chinese_char(cp): - output.append(" ") - output.append(char) - output.append(" ") - else: - output.append(char) - return "".join(output) - - def _is_chinese_char(self, cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - if ((cp >= 0x4E00 and cp <= 0x9FFF) or # - (cp >= 0x3400 and cp <= 0x4DBF) or # - (cp >= 0x20000 and cp <= 0x2A6DF) or # - (cp >= 0x2A700 and cp <= 0x2B73F) or # - (cp >= 0x2B740 and cp <= 0x2B81F) or # - (cp >= 0x2B820 and cp <= 0x2CEAF) or - (cp >= 0xF900 and cp <= 0xFAFF) or # - (cp >= 0x2F800 and cp <= 0x2FA1F)): # - return True - - return False - - def _clean_text(self, text): - """Performs invalid character removal and whitespace cleanup on text.""" - output = [] - for char in text: - cp = ord(char) - if cp == 0 or cp == 0xfffd or _is_control(char): - continue - if _is_whitespace(char): - output.append(" ") - else: - output.append(char) - return "".join(output) - - -class WordpieceTokenizer(object): - """Runs WordPiece tokenization.""" - - def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=100): - self.vocab = vocab - self.unk_token = unk_token - self.max_input_chars_per_word = max_input_chars_per_word - - def tokenize(self, text): - """Tokenizes a piece of text into its word pieces. - - This uses a greedy longest-match-first algorithm to perform tokenization - using the given vocabulary. 
- - For example: - input = "unaffable" - output = ["un", "##aff", "##able"] - - Args: - text: A single token or whitespace separated tokens. This should have - already been passed through `BasicTokenizer`. - - Returns: - A list of wordpiece tokens. - """ - - output_tokens = [] - for token in whitespace_tokenize(text): - chars = list(token) - if len(chars) > self.max_input_chars_per_word: - output_tokens.append(self.unk_token) - continue - - is_bad = False - start = 0 - sub_tokens = [] - while start < len(chars): - end = len(chars) - cur_substr = None - while start < end: - substr = "".join(chars[start:end]) - if start > 0: - substr = "##" + substr - if substr in self.vocab: - cur_substr = substr - break - end -= 1 - if cur_substr is None: - is_bad = True - break - sub_tokens.append(cur_substr) - start = end - - if is_bad: - output_tokens.append(self.unk_token) - else: - output_tokens.extend(sub_tokens) - return output_tokens - - -def _is_whitespace(char): - """Checks whether `chars` is a whitespace character.""" - # \t, \n, and \r are technically contorl characters but we treat them - # as whitespace since they are generally considered as such. - if char == " " or char == "\t" or char == "\n" or char == "\r": - return True - cat = unicodedata.category(char) - if cat == "Zs": - return True - return False - - -def _is_control(char): - """Checks whether `chars` is a control character.""" - # These are technically control characters but we count them as whitespace - # characters. - if char == "\t" or char == "\n" or char == "\r": - return False - cat = unicodedata.category(char) - if cat.startswith("C"): - return True - return False - - -def _is_punctuation(char): - """Checks whether `chars` is a punctuation character.""" - cp = ord(char) - # We treat all non-letter/number ASCII as punctuation. - # Characters such as "^", "$", and "`" are not in the Unicode - # Punctuation class but we treat them as punctuation anyways, for - # consistency. - if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or - (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)): - return True - cat = unicodedata.category(char) - if cat.startswith("P"): - return True - return False diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py deleted file mode 100644 index e2e35c1a8cc4c628c5d05802677142c9a2122d2b..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py +++ /dev/null @@ -1,90 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). 
-''' - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including number and abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/alignment_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/alignment_utils.py deleted file mode 100644 index ccc7f74cb94d5b8baa2d4e9dfd44f653d47ee43e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/roberta/alignment_utils.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import Counter -from typing import List - -import torch - - -def align_bpe_to_words(roberta, bpe_tokens: torch.LongTensor, other_tokens: List[str]): - """ - Helper to align GPT-2 BPE to other tokenization formats (e.g., spaCy). - - Args: - roberta (RobertaHubInterface): RoBERTa instance - bpe_tokens (torch.LongTensor): GPT-2 BPE tokens of shape `(T_bpe)` - other_tokens (List[str]): other tokens of shape `(T_words)` - - Returns: - List[str]: mapping from *other_tokens* to corresponding *bpe_tokens*. 
- """ - assert bpe_tokens.dim() == 1 - assert bpe_tokens[0] == 0 - - def clean(text): - return text.strip() - - # remove whitespaces to simplify alignment - bpe_tokens = [roberta.task.source_dictionary.string([x]) for x in bpe_tokens] - bpe_tokens = [ - clean(roberta.bpe.decode(x) if x not in {"", ""} else x) for x in bpe_tokens - ] - other_tokens = [clean(str(o)) for o in other_tokens] - - # strip leading - bpe_tokens = bpe_tokens[1:] - assert "".join(bpe_tokens) == "".join(other_tokens) - - # create alignment from every word to a list of BPE tokens - alignment = [] - bpe_toks = filter(lambda item: item[1] != "", enumerate(bpe_tokens, start=1)) - j, bpe_tok = next(bpe_toks) - for other_tok in other_tokens: - bpe_indices = [] - while True: - if other_tok.startswith(bpe_tok): - bpe_indices.append(j) - other_tok = other_tok[len(bpe_tok) :] - try: - j, bpe_tok = next(bpe_toks) - except StopIteration: - j, bpe_tok = None, None - elif bpe_tok.startswith(other_tok): - # other_tok spans multiple BPE tokens - bpe_indices.append(j) - bpe_tok = bpe_tok[len(other_tok) :] - other_tok = "" - else: - raise Exception('Cannot align "{}" and "{}"'.format(other_tok, bpe_tok)) - if other_tok == "": - break - assert len(bpe_indices) > 0 - alignment.append(bpe_indices) - assert len(alignment) == len(other_tokens) - - return alignment - - -def align_features_to_words(roberta, features, alignment): - """ - Align given features to words. - - Args: - roberta (RobertaHubInterface): RoBERTa instance - features (torch.Tensor): features to align of shape `(T_bpe x C)` - alignment: alignment between BPE tokens and words returned by - func:`align_bpe_to_words`. - """ - assert features.dim() == 2 - - bpe_counts = Counter(j for bpe_indices in alignment for j in bpe_indices) - assert bpe_counts[0] == 0 # shouldn't be aligned - denom = features.new([bpe_counts.get(j, 1) for j in range(len(features))]) - weighted_features = features / denom.unsqueeze(-1) - - output = [weighted_features[0]] - largest_j = -1 - for bpe_indices in alignment: - output.append(weighted_features[bpe_indices].sum(dim=0)) - largest_j = max(largest_j, *bpe_indices) - for j in range(largest_j + 1, len(features)): - output.append(weighted_features[j]) - output = torch.stack(output) - assert torch.all(torch.abs(output.sum(dim=0) - features.sum(dim=0)) < 1e-4) - return output - - -def spacy_nlp(): - if getattr(spacy_nlp, "_nlp", None) is None: - try: - from spacy.lang.en import English - - spacy_nlp._nlp = English() - except ImportError: - raise ImportError("Please install spacy with: pip install spacy") - return spacy_nlp._nlp - - -def spacy_tokenizer(): - if getattr(spacy_tokenizer, "_tokenizer", None) is None: - try: - nlp = spacy_nlp() - spacy_tokenizer._tokenizer = nlp.Defaults.create_tokenizer(nlp) - except ImportError: - raise ImportError("Please install spacy with: pip install spacy") - return spacy_tokenizer._tokenizer diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/duration.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/duration.py deleted file mode 100644 index c3b5e112b72dd5a07ea2463f604d98bb8d961496..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/utils/duration.py +++ /dev/null @@ -1,33 +0,0 @@ -# Usage -> python duration.py /src/folder/path - -import soundfile as sf -import sys -import os -from glob import glob -from joblib import Parallel, delayed -from tqdm import tqdm - - -def get_duration(fpath): - w = sf.SoundFile(fpath) - sr = w.samplerate - assert 22050 == sr, "Sample 
rate is not 22050" - return len(w) / sr - - -def main(folder, ext="wav"): - file_list = glob(folder + "/**/*." + ext, recursive=True) - print(f"\n\tTotal number of wav files {len(file_list)}") - duration_list = Parallel(n_jobs=1)( - delayed(get_duration)(i) for i in tqdm(file_list) - ) - print( - f"\n\tMin Duration {min(duration_list):.2f} Max Duration {max(duration_list):.2f} in secs" - ) - print(f"\n\tTotal Duration {sum(duration_list)/3600:.2f} in hours") - - -if __name__ == "__main__": - folder = sys.argv[1] - folder = os.path.abspath(folder) - main(folder) diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/init.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/init.py deleted file mode 100644 index 39dd83dbd55475d562a3f54d951cb822800d2e0f..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/init.py +++ /dev/null @@ -1,79 +0,0 @@ -import os -import json -import argparse -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader - -from data_utils import TextMelLoader, TextMelCollate -import models -import commons -import utils - - -class FlowGenerator_DDI(models.FlowGenerator): - """A helper for Data-dependent Initialization""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - for f in self.decoder.flows: - if getattr(f, "set_ddi", False): - f.set_ddi(True) - - -def main(): - hps = utils.get_hparams() - logger = utils.get_logger(hps.log_dir) - logger.info(hps) - utils.check_git_hash(hps.log_dir) - - torch.manual_seed(hps.train.seed) - - train_dataset = TextMelLoader(hps.data.training_files, hps.data) - collate_fn = TextMelCollate(1) - train_loader = DataLoader( - train_dataset, - num_workers=8, - shuffle=True, - batch_size=hps.train.batch_size, - pin_memory=True, - drop_last=True, - collate_fn=collate_fn, - ) - symbols = hps.data.punc + hps.data.chars - generator = FlowGenerator_DDI( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ).cuda() - optimizer_g = commons.Adam( - generator.parameters(), - scheduler=hps.train.scheduler, - dim_model=hps.model.hidden_channels, - warmup_steps=hps.train.warmup_steps, - lr=hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - - generator.train() - for batch_idx, (x, x_lengths, y, y_lengths) in enumerate(train_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - - _ = generator(x, x_lengths, y, y_lengths, gen=False) - break - - utils.save_checkpoint( - generator, - optimizer_g, - hps.train.learning_rate, - 0, - os.path.join(hps.model_dir, "ddi_G.pth"), - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/Harveenchadha/en_to_indic_translation/model_configs/custom_transformer.py b/spaces/Harveenchadha/en_to_indic_translation/model_configs/custom_transformer.py deleted file mode 100644 index b122e1bf5c81534aae35bb6235c1feaf45b7bada..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/model_configs/custom_transformer.py +++ /dev/null @@ -1,38 +0,0 @@ -from fairseq.models import register_model_architecture -from fairseq.models.transformer import base_architecture - - -@register_model_architecture("transformer", "transformer_2x") -def transformer_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, 
"encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_4x") -def transformer_huge(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1536) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1536) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_9x") -def transformer_xlarge(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 2048) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 8192) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 2048) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 8192) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_architecture(args) diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/examples.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/examples.py deleted file mode 100644 index a40ae25e903eebb8913276739200c2b02372e839..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/examples.py +++ /dev/null @@ -1,327 +0,0 @@ -""" -Defines helper methods useful for loading and caching Interface examples. -""" -from __future__ import annotations - -import ast -import csv -import os -import warnings -from pathlib import Path -from typing import TYPE_CHECKING, Any, Callable, List - -from gradio import utils -from gradio.components import Dataset -from gradio.context import Context -from gradio.documentation import document, set_documentation_group -from gradio.flagging import CSVLogger - -if TYPE_CHECKING: # Only import for type checking (to avoid circular imports). - from gradio.components import IOComponent - -CACHED_FOLDER = "gradio_cached_examples" -LOG_FILE = "log.csv" - -set_documentation_group("component-helpers") - - -def create_examples( - examples: List[Any] | List[List[Any]] | str, - inputs: IOComponent | List[IOComponent], - outputs: IOComponent | List[IOComponent] | None = None, - fn: Callable | None = None, - cache_examples: bool = False, - examples_per_page: int = 10, - _api_mode: bool = False, - label: str | None = None, - elem_id: str | None = None, - run_on_click: bool = False, - preprocess: bool = True, - postprocess: bool = True, - batch: bool = False, -): - """Top-level synchronous function that creates Examples. Provided for backwards compatibility, i.e. so that gr.Examples(...) 
can be used to create the Examples component.""" - examples_obj = Examples( - examples=examples, - inputs=inputs, - outputs=outputs, - fn=fn, - cache_examples=cache_examples, - examples_per_page=examples_per_page, - _api_mode=_api_mode, - label=label, - elem_id=elem_id, - run_on_click=run_on_click, - preprocess=preprocess, - postprocess=postprocess, - batch=batch, - _initiated_directly=False, - ) - utils.synchronize_async(examples_obj.create) - return examples_obj - - -@document() -class Examples: - """ - This class is a wrapper over the Dataset component and can be used to create Examples - for Blocks / Interfaces. Populates the Dataset component with examples and - assigns event listener so that clicking on an example populates the input/output - components. Optionally handles example caching for fast inference. - - Demos: blocks_inputs, fake_gan - Guides: more_on_examples_and_flagging, using_hugging_face_integrations, image_classification_in_pytorch, image_classification_in_tensorflow, image_classification_with_vision_transformers, create_your_own_friends_with_a_gan - """ - - def __init__( - self, - examples: List[Any] | List[List[Any]] | str, - inputs: IOComponent | List[IOComponent], - outputs: IOComponent | List[IOComponent] | None = None, - fn: Callable | None = None, - cache_examples: bool = False, - examples_per_page: int = 10, - _api_mode: bool = False, - label: str | None = "Examples", - elem_id: str | None = None, - run_on_click: bool = False, - preprocess: bool = True, - postprocess: bool = True, - batch: bool = False, - _initiated_directly: bool = True, - ): - """ - Parameters: - examples: example inputs that can be clicked to populate specific components. Should be nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component. A string path to a directory of examples can also be provided but it should be within the directory with the python file running the gradio app. If there are multiple input components and a directory is provided, a log.csv file must be present in the directory to link corresponding inputs. - inputs: the component or list of components corresponding to the examples - outputs: optionally, provide the component or list of components corresponding to the output of the examples. Required if `cache` is True. - fn: optionally, provide the function to run to generate the outputs corresponding to the examples. Required if `cache` is True. - cache_examples: if True, caches examples for fast runtime. If True, then `fn` and `outputs` need to be provided - examples_per_page: how many examples to show per page. - label: the label to use for the examples component (by default, "Examples") - elem_id: an optional string that is assigned as the id of this component in the HTML DOM. - run_on_click: if cache_examples is False, clicking on an example does not run the function when an example is clicked. Set this to True to run the function when an example is clicked. Has no effect if cache_examples is True. - preprocess: if True, preprocesses the example input before running the prediction function and caching the output. Only applies if cache_examples is True. - postprocess: if True, postprocesses the example output after running the prediction function and before caching. Only applies if cache_examples is True. - batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. Used only if cache_examples is True. 
- """ - if _initiated_directly: - warnings.warn( - "Please use gr.Examples(...) instead of gr.examples.Examples(...) to create the Examples.", - ) - - if cache_examples and (fn is None or outputs is None): - raise ValueError("If caching examples, `fn` and `outputs` must be provided") - - if not isinstance(inputs, list): - inputs = [inputs] - if outputs and not isinstance(outputs, list): - outputs = [outputs] - - working_directory = Path().absolute() - - if examples is None: - raise ValueError("The parameter `examples` cannot be None") - elif isinstance(examples, list) and ( - len(examples) == 0 or isinstance(examples[0], list) - ): - pass - elif ( - isinstance(examples, list) and len(inputs) == 1 - ): # If there is only one input component, examples can be provided as a regular list instead of a list of lists - examples = [[e] for e in examples] - elif isinstance(examples, str): - if not Path(examples).exists(): - raise FileNotFoundError( - "Could not find examples directory: " + examples - ) - working_directory = examples - if not (Path(examples) / LOG_FILE).exists(): - if len(inputs) == 1: - examples = [[e] for e in os.listdir(examples)] - else: - raise FileNotFoundError( - "Could not find log file (required for multiple inputs): " - + LOG_FILE - ) - else: - with open(Path(examples) / LOG_FILE) as logs: - examples = list(csv.reader(logs)) - examples = [ - examples[i][: len(inputs)] for i in range(1, len(examples)) - ] # remove header and unnecessary columns - - else: - raise ValueError( - "The parameter `examples` must either be a string directory or a list" - "(if there is only 1 input component) or (more generally), a nested " - "list, where each sublist represents a set of inputs." - ) - - input_has_examples = [False] * len(inputs) - for example in examples: - for idx, example_for_input in enumerate(example): - if not (example_for_input is None): - try: - input_has_examples[idx] = True - except IndexError: - pass # If there are more example components than inputs, ignore. This can sometimes be intentional (e.g. loading from a log file where outputs and timestamps are also logged) - - inputs_with_examples = [ - inp for (inp, keep) in zip(inputs, input_has_examples) if keep - ] - non_none_examples = [ - [ex for (ex, keep) in zip(example, input_has_examples) if keep] - for example in examples - ] - - self.examples = examples - self.non_none_examples = non_none_examples - self.inputs = inputs - self.inputs_with_examples = inputs_with_examples - self.outputs = outputs - self.fn = fn - self.cache_examples = cache_examples - self._api_mode = _api_mode - self.preprocess = preprocess - self.postprocess = postprocess - self.batch = batch - - with utils.set_directory(working_directory): - self.processed_examples = [ - [ - component.postprocess(sample) - for component, sample in zip(inputs, example) - ] - for example in examples - ] - self.non_none_processed_examples = [ - [ex for (ex, keep) in zip(example, input_has_examples) if keep] - for example in self.processed_examples - ] - if cache_examples: - for example in self.examples: - if len([ex for ex in example if ex is not None]) != len(self.inputs): - warnings.warn( - "Examples are being cached but not all input components have " - "example values. This may result in an exception being thrown by " - "your function. If you do get an error while caching examples, make " - "sure all of your inputs have example values for all of your examples " - "or you provide default values for those particular parameters in your function." 
- ) - break - - with utils.set_directory(working_directory): - self.dataset = Dataset( - components=inputs_with_examples, - samples=non_none_examples, - type="index", - label=label, - samples_per_page=examples_per_page, - elem_id=elem_id, - ) - - self.cached_folder = Path(CACHED_FOLDER) / str(self.dataset._id) - self.cached_file = Path(self.cached_folder) / "log.csv" - self.cache_examples = cache_examples - self.run_on_click = run_on_click - - async def create(self) -> None: - """Caches the examples if self.cache_examples is True and creates the Dataset - component to hold the examples""" - - async def load_example(example_id): - if self.cache_examples: - processed_example = self.non_none_processed_examples[ - example_id - ] + await self.load_from_cache(example_id) - else: - processed_example = self.non_none_processed_examples[example_id] - return utils.resolve_singleton(processed_example) - - if Context.root_block: - if self.cache_examples and self.outputs: - targets = self.inputs_with_examples - else: - targets = self.inputs - self.dataset.click( - load_example, - inputs=[self.dataset], - outputs=targets, # type: ignore - postprocess=False, - queue=False, - ) - if self.run_on_click and not self.cache_examples: - if self.fn is None: - raise ValueError("Cannot run_on_click if no function is provided") - self.dataset.click( - self.fn, - inputs=self.inputs, # type: ignore - outputs=self.outputs, # type: ignore - ) - - if self.cache_examples: - await self.cache() - - async def cache(self) -> None: - """ - Caches all of the examples so that their predictions can be shown immediately. - """ - if Path(self.cached_file).exists(): - print( - f"Using cache from '{Path(self.cached_folder).resolve()}' directory. If method or examples have changed since last caching, delete this folder to clear cache." - ) - else: - if Context.root_block is None: - raise ValueError("Cannot cache examples if not in a Blocks context") - - print(f"Caching examples at: '{Path(self.cached_file).resolve()}'") - cache_logger = CSVLogger() - - # create a fake dependency to process the examples and get the predictions - dependency = Context.root_block.set_event_trigger( - event_name="fake_event", - fn=self.fn, - inputs=self.inputs_with_examples, # type: ignore - outputs=self.outputs, # type: ignore - preprocess=self.preprocess and not self._api_mode, - postprocess=self.postprocess and not self._api_mode, - batch=self.batch, - ) - - fn_index = Context.root_block.dependencies.index(dependency) - assert self.outputs is not None - cache_logger.setup(self.outputs, self.cached_folder) - for example_id, _ in enumerate(self.examples): - processed_input = self.processed_examples[example_id] - if self.batch: - processed_input = [[value] for value in processed_input] - prediction = await Context.root_block.process_api( - fn_index=fn_index, inputs=processed_input, request=None, state={} - ) - output = prediction["data"] - if self.batch: - output = [value[0] for value in output] - cache_logger.flag(output) - # Remove the "fake_event" to prevent bugs in loading interfaces from spaces - Context.root_block.dependencies.remove(dependency) - Context.root_block.fns.pop(fn_index) - - async def load_from_cache(self, example_id: int) -> List[Any]: - """Loads a particular cached example for the interface. - Parameters: - example_id: The id of the example to process (zero-indexed). 
- """ - with open(self.cached_file) as cache: - examples = list(csv.reader(cache)) - example = examples[example_id + 1] # +1 to adjust for header - output = [] - assert self.outputs is not None - for component, value in zip(self.outputs, example): - try: - value_as_dict = ast.literal_eval(value) - assert utils.is_update(value_as_dict) - output.append(value_as_dict) - except (ValueError, TypeError, SyntaxError, AssertionError): - output.append(component.serialize(value, self.cached_folder)) - return output diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/util.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/util.py deleted file mode 100644 index 9ee16385d8b1342a2d60a5f1aa5cadcfbe934bd8..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/modules/util.py +++ /dev/null @@ -1,130 +0,0 @@ -import torch -import torch.nn as nn - - -def count_params(model): - total_params = sum(p.numel() for p in model.parameters()) - return total_params - - -class ActNorm(nn.Module): - def __init__(self, num_features, logdet=False, affine=True, - allow_reverse_init=False): - assert affine - super().__init__() - self.logdet = logdet - self.loc = nn.Parameter(torch.zeros(1, num_features, 1, 1)) - self.scale = nn.Parameter(torch.ones(1, num_features, 1, 1)) - self.allow_reverse_init = allow_reverse_init - - self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8)) - - def initialize(self, input): - with torch.no_grad(): - flatten = input.permute(1, 0, 2, 3).contiguous().view(input.shape[1], -1) - mean = ( - flatten.mean(1) - .unsqueeze(1) - .unsqueeze(2) - .unsqueeze(3) - .permute(1, 0, 2, 3) - ) - std = ( - flatten.std(1) - .unsqueeze(1) - .unsqueeze(2) - .unsqueeze(3) - .permute(1, 0, 2, 3) - ) - - self.loc.data.copy_(-mean) - self.scale.data.copy_(1 / (std + 1e-6)) - - def forward(self, input, reverse=False): - if reverse: - return self.reverse(input) - if len(input.shape) == 2: - input = input[:,:,None,None] - squeeze = True - else: - squeeze = False - - _, _, height, width = input.shape - - if self.training and self.initialized.item() == 0: - self.initialize(input) - self.initialized.fill_(1) - - h = self.scale * (input + self.loc) - - if squeeze: - h = h.squeeze(-1).squeeze(-1) - - if self.logdet: - log_abs = torch.log(torch.abs(self.scale)) - logdet = height*width*torch.sum(log_abs) - logdet = logdet * torch.ones(input.shape[0]).to(input) - return h, logdet - - return h - - def reverse(self, output): - if self.training and self.initialized.item() == 0: - if not self.allow_reverse_init: - raise RuntimeError( - "Initializing ActNorm in reverse direction is " - "disabled by default. Use allow_reverse_init=True to enable." 
- ) - else: - self.initialize(output) - self.initialized.fill_(1) - - if len(output.shape) == 2: - output = output[:,:,None,None] - squeeze = True - else: - squeeze = False - - h = output / self.scale - self.loc - - if squeeze: - h = h.squeeze(-1).squeeze(-1) - return h - - -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - -class Labelator(AbstractEncoder): - """Net2Net Interface for Class-Conditional Model""" - def __init__(self, n_classes, quantize_interface=True): - super().__init__() - self.n_classes = n_classes - self.quantize_interface = quantize_interface - - def encode(self, c): - c = c[:,None] - if self.quantize_interface: - return c, None, [None, None, c.long()] - return c - - -class SOSProvider(AbstractEncoder): - # for unconditional training - def __init__(self, sos_token, quantize_interface=True): - super().__init__() - self.sos_token = sos_token - self.quantize_interface = quantize_interface - - def encode(self, x): - # get batch size from data and replicate sos_token - c = torch.ones(x.shape[0], 1)*self.sos_token - c = c.long().to(x.device) - if self.quantize_interface: - return c, None, [None, None, c] - return c diff --git a/spaces/IPN/streamlit_demo/README.md b/spaces/IPN/streamlit_demo/README.md deleted file mode 100644 index a20f4c531414109befd43812dd3fa6d06ef7cb40..0000000000000000000000000000000000000000 --- a/spaces/IPN/streamlit_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Streamlit_demo -emoji: 👁 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/IVentureISB/Gen-AI/chatbot.py b/spaces/IVentureISB/Gen-AI/chatbot.py deleted file mode 100644 index 41246fe63531dec67c55308c8c0f6653bb2ecdaa..0000000000000000000000000000000000000000 --- a/spaces/IVentureISB/Gen-AI/chatbot.py +++ /dev/null @@ -1,24 +0,0 @@ -import gradio as gr - -messages = [ - {"role": "system", "content": "You are an AI assistant that only gives responses from the website https://i-venture.org/ and you help people make decisions about how to make a difference in others' lives. You also provide the relevant links from that website as part of your answers."}, -] - -def chatbot(input): - if input: - context = create_context(input, df) - message=f"Answer the question based on the context below, and if the question can't be answered based on the context, say \"I don't know\"\n\nContext: {context}\n\n---\n\nQuestion: {input}\nAnswer:", - messages.append({"role": "user", "content": message}) - chat = openai.ChatCompletion.create( - temperature=0.5, model="gpt-3.5-turbo", messages=messages, - ) - reply = chat.choices[0].message.content - messages.append({"role": "assistant", "content": reply}) - return reply - -inputs = gr.inputs.Textbox(lines=7, label="Chat with I-venture @ ISB AI powered bot") -outputs = gr.outputs.Textbox(label="Reply") - -gr.Interface(fn=chatbot, inputs=inputs, outputs=outputs, title="Talk with I-venture @ ISB", - description="Anything you want to find out about entreprenuership at ISB. Sample questions include >>> how to get incubated at ISB Dlabs? >>> What is the team behind I-venture @ ISB? 
>>> and more", - theme="compact").launch(share=True, debug=True) \ No newline at end of file diff --git a/spaces/Illia56/Youtube-Whisper-Llama/app.py b/spaces/Illia56/Youtube-Whisper-Llama/app.py deleted file mode 100644 index 5ecbd5c962565d7e66a29eae18742ce4c2ccac1b..0000000000000000000000000000000000000000 --- a/spaces/Illia56/Youtube-Whisper-Llama/app.py +++ /dev/null @@ -1,207 +0,0 @@ -import os -import logging -from typing import Any, List, Mapping, Optional -from langchain.llms import HuggingFaceHub -from gradio_client import Client -from langchain.schema import Document -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores import FAISS -from langchain.embeddings.huggingface import HuggingFaceEmbeddings -from langchain.callbacks.manager import CallbackManagerForLLMRun -from langchain.llms.base import LLM -from langchain.chains import RetrievalQA -from langchain.prompts import PromptTemplate -import streamlit as st -from pytube import YouTube -# import replicate - -DESCRIPTION = """ -
-<table>
-<tr><td>Operating System</td><td>Windows 10 (64-bit)</td><td>macOS 10.14 or higher</td></tr>
-<tr><td>CPU</td><td>Intel Core i5 or equivalent AMD processor</td><td>Intel Core i5 or equivalent Apple processor</td></tr>
-<tr><td>RAM</td><td>4 GB (8 GB recommended)</td><td>4 GB (8 GB recommended)</td></tr>
-<tr><td>Disk Space</td><td>1 GB for Guitar Rig 6 Player<br>3 GB for Guitar Rig 6 Pro</td><td>1 GB for Guitar Rig 6 Player<br>3 GB for Guitar Rig 6 Pro</td></tr>
-<tr><td>Graphics Card</td><td>NVIDIA GeForce GTX 600 series or higher<br>AMD Radeon HD 7000 series or higher<br>Intel HD Graphics 4000 or higher</td><td>NVIDIA GeForce GTX 600 series or higher<br>AMD Radeon HD 7000 series or higher<br>Intel HD Graphics 4000 or higher</td></tr>
-<tr><td>Audio Interface</td><td>A dedicated audio interface with ASIO driver support is recommended for optimal performance and low latency.</td><td>A dedicated audio interface with Core Audio driver support is recommended for optimal performance and low latency.</td></tr>
-<tr><td>MIDI Device</td><td>A MIDI device such as a footswitch, pedal, keyboard, controller, etc., is optional but recommended for controlling Guitar Rig 6 parameters in real time.</td><td>A MIDI device such as a footswitch, pedal, keyboard, controller, etc., is optional but recommended for controlling Guitar Rig 6 parameters in real time.</td></tr>
-</table>
-<table>
-<tr><th>Method</th><th>Deposit</th><th>Withdrawal</th></tr>
-<tr><td>Bank transfer</td><td></td><td></td></tr>
-<tr><td>Credit/debit card</td><td></td><td>No</td></tr>
-<tr><td>EFT</td><td></td><td>No</td></tr>
-<tr><td>Ozow</td><td></td><td>No</td></tr>
-<tr><td>Peach Payments</td><td></td><td>No</td></tr>
-<tr><td>Zapper</td><td></td><td>No</td></tr>
-<tr><td>Voucher</td><td></td><td>No</td></tr>
-<tr><td>Hollywoodbets branches</td><td></td><td></td></tr>
-<tr><td>Hollywoodbets ATM card</td><td>No</td><td></td></tr>
-<tr><td>Hollywoodbets eWallet (FNB)</td><td>No</td><td></td></tr>
-<tr><td>Hollywoodbets Instant Money (Standard Bank)</td><td>No</td><td></td></tr>
-<tr><td>Hollywoodbets Cash Send (Absa)</td><td>No</td><td></td></tr>
-<tr><td>Hollywoodbets Cash Send Plus (Nedbank)</td><td>No</td><td></td></tr>
-</table>
-<table>
-<tr><th>Model</th><th>Llama2</th><th>Llama2-hf</th><th>Llama2-chat</th><th>Llama2-chat-hf</th></tr>
-<tr><td>7B</td><td>Link</td><td>Link</td><td>Link</td><td>Link</td></tr>
-<tr><td>13B</td><td>Link</td><td>Link</td><td>Link</td><td>Link</td></tr>
-<tr><td>70B</td><td>Link</td><td>Link</td><td>Link</td><td>Link</td></tr>
-</table>
- -openai/whisper-large-v3 -""" - -models = { - "Llama2-70b": { - "model_link": "https://huggingface.co/meta-llama/Llama-2-70b", - "chat_link": "https://ysharma-explore-llamav2-with-tgi.hf.space/", - }, - "Llama2-13b": { - "model_link": "https://huggingface.co/meta-llama/Llama-2-13b", - "chat_link": "https://huggingface-projects-llama-2-13b-chat.hf.space/", - } -} - -DESCRIPTION = """ -Welcome to the **YouTube Video Chatbot** powered by Llama-2 models. Here's what you can do: -- **Transcribe & Understand**: Provide any YouTube video URL, and our system will transcribe it. Our advanced NLP model will then understand the content, ready to answer your questions. -- **Ask Anything**: Based on the video's content, ask any question, and get instant, context-aware answers. -To get started, simply paste a YouTube video URL and select a model in the sidebar, then start chatting with the model about the video's content. Enjoy the experience! -""" -st.title("YouTube Video Chatbot") -st.markdown(DESCRIPTION) - -def get_video_title(youtube_url: str) -> str: - yt = YouTube(youtube_url) - embed_url = f"https://www.youtube.com/embed/{yt.video_id}" - embed_html = f'' - return yt.title, embed_html - -def transcribe_video(youtube_url: str, path: str) -> List[Document]: - """ - Transcribe a video and return its content as a Document. - """ - logging.info(f"Transcribing video: {youtube_url}") - client = Client("https://sanchit-gandhi-whisper-jax.hf.space/") - result = client.predict(youtube_url, "translate", True, fn_index=7) - return [Document(page_content=result[1], metadata=dict(page=1))] - -def predict( - message: str, system_prompt: str = "", model_url: str = models["Llama2-70b"]["chat_link"] -) -> Any: - """ - Predict a response using a client. - """ - client = Client(model_url) - response = client.predict(message, system_prompt, 0.7, 4096, 0.5, 1.2, api_name="/chat_1") - return response - -PATH = os.path.join(os.path.expanduser("~"), "Data") - -def initialize_session_state(): - if "youtube_url" not in st.session_state: - st.session_state.youtube_url = "" - if "model_choice" not in st.session_state: - st.session_state.model_choice = "Llama2-70b" - if "setup_done" not in st.session_state: - st.session_state.setup_done = False - if "doneYoutubeurl" not in st.session_state: - st.session_state.doneYoutubeurl = "" - -def sidebar(): - with st.sidebar: - st.markdown("Enter the YouTube Video URL below🔗") - st.session_state.youtube_url = st.text_input("YouTube Video URL:") - - model_choice = st.radio("Choose a Model:", list(models.keys())) - st.session_state.model_choice = model_choice - - if st.session_state.youtube_url: - # Get the video title - video_title, embed_html = get_video_title(st.session_state.youtube_url) - st.markdown(f"### {video_title}") - - # Embed the video - st.markdown(embed_html, unsafe_allow_html=True) - - - -sidebar() -initialize_session_state() - -text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0) -embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-l6-v2") - -prompt = PromptTemplate( - template="""Given the context about a video. Answer the user in a friendly and precise manner. - Context: {context} - Human: {question} - AI:""", - input_variables=["context", "question"] -) - -class LlamaLLM(LLM): - """ - Custom LLM class. 
- """ - - @property - def _llm_type(self) -> str: - return "custom" - - def _call(self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None) -> str: - model_link = models[st.session_state.model_choice]["chat_link"] - response = predict(prompt, model_url=model_link) - return response - - @property - def _identifying_params(self) -> Mapping[str, Any]: - """Get the identifying parameters.""" - return {} - -# Check if a new YouTube URL is provided -if st.session_state.youtube_url != st.session_state.doneYoutubeurl: - st.session_state.setup_done = False - -if st.session_state.youtube_url and not st.session_state.setup_done: - with st.status("Transcribing video..."): - data = transcribe_video(st.session_state.youtube_url, PATH) - - with st.status("Running Embeddings..."): - docs = text_splitter.split_documents(data) - - docsearch = FAISS.from_documents(docs, embeddings) - retriever = docsearch.as_retriever() - retriever.search_kwargs["distance_metric"] = "cos" - retriever.search_kwargs["k"] = 4 - with st.status("Running RetrievalQA..."): - llama_instance = LlamaLLM() - st.session_state.qa = RetrievalQA.from_chain_type(llm=llama_instance, chain_type="stuff", retriever=retriever, chain_type_kwargs={"prompt": prompt}) - - st.session_state.doneYoutubeurl = st.session_state.youtube_url - st.session_state.setup_done = True # Mark the setup as done for this URL - -if "messages" not in st.session_state: - st.session_state.messages = [] - -for message in st.session_state.messages: - with st.chat_message(message["role"], avatar=("🧑‍💻" if message["role"] == "human" else "🦙")): - st.markdown(message["content"]) - -textinput = st.chat_input("Ask anything about the video...") - -if prompt := textinput: - st.chat_message("human", avatar="🧑‍💻").markdown(prompt) - st.session_state.messages.append({"role": "human", "content": prompt}) - with st.status("Requesting Client..."): - video_title, _ = get_video_title(st.session_state.youtube_url) - additional_context = f"Given the context about a video titled '{video_title}' available at '{st.session_state.youtube_url}'." 
- response = st.session_state.qa.run(prompt + " " + additional_context) - with st.chat_message("assistant", avatar="🦙"): - st.markdown(response) - # Add assistant response to chat history - st.session_state.messages.append({"role": "assistant", "content": response}) diff --git a/spaces/Illumotion/Koboldcpp/ggml-cuda.h b/spaces/Illumotion/Koboldcpp/ggml-cuda.h deleted file mode 100644 index fda704b6656234ae63e1399fc680c5e5cf6c4a0d..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/ggml-cuda.h +++ /dev/null @@ -1,47 +0,0 @@ -#pragma once - -#include "ggml.h" - -#ifdef GGML_USE_HIPBLAS -#define GGML_CUDA_NAME "ROCm" -#define GGML_CUBLAS_NAME "hipBLAS" -#else -#define GGML_CUDA_NAME "CUDA" -#define GGML_CUBLAS_NAME "cuBLAS" -#endif - -#ifdef __cplusplus -extern "C" { -#endif - -#define GGML_CUDA_MAX_DEVICES 16 - -GGML_API void ggml_init_cublas(void); -GGML_API void * ggml_cuda_host_malloc(size_t size); -GGML_API void ggml_cuda_host_free(void * ptr); - -GGML_API bool ggml_cuda_can_mul_mat(const struct ggml_tensor * src0, const struct ggml_tensor * src1, struct ggml_tensor * dst); -GGML_API void ggml_cuda_set_tensor_split(const float * tensor_split); -GGML_API void ggml_cuda_transform_tensor(void * data, struct ggml_tensor * tensor); -GGML_API void ggml_cuda_free_data(struct ggml_tensor * tensor); - -GGML_API void ggml_cuda_assign_buffers(struct ggml_tensor * tensor); -GGML_API void ggml_cuda_assign_buffers_no_scratch(struct ggml_tensor * tensor); -GGML_API void ggml_cuda_assign_buffers_force_inplace(struct ggml_tensor * tensor); - -GGML_API void ggml_cuda_assign_buffers_no_alloc(struct ggml_tensor * tensor); -GGML_API void ggml_cuda_assign_scratch_offset(struct ggml_tensor * tensor, size_t offset); -GGML_API void ggml_cuda_copy_to_device(struct ggml_tensor * tensor); - -GGML_API void ggml_cuda_set_main_device(int main_device); -GGML_API void ggml_cuda_set_mul_mat_q(bool mul_mat_q); -GGML_API void ggml_cuda_set_scratch_size(size_t scratch_size); -GGML_API void ggml_cuda_free_scratch(void); -GGML_API bool ggml_cuda_compute_forward(struct ggml_compute_params * params, struct ggml_tensor * tensor); - -GGML_API int ggml_cuda_get_device_count(void); -GGML_API void ggml_cuda_get_device_description(int device, char * description, size_t description_size); - -#ifdef __cplusplus -} -#endif diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/losses/fid/__init__.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/losses/fid/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/parsing/parsenet.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/parsing/parsenet.py deleted file mode 100644 index e178ebe43a1ef666aaea0bc0faf629485c22a24f..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/parsing/parsenet.py +++ /dev/null @@ -1,194 +0,0 @@ -"""Modified from https://github.com/chaofengc/PSFRGAN -""" -import numpy as np -import torch.nn as nn -from torch.nn import functional as F - - -class NormLayer(nn.Module): - """Normalization Layers. - - Args: - channels: input channels, for batch norm and instance norm. - input_size: input shape without batch size, for layer norm. 
- """ - - def __init__(self, channels, normalize_shape=None, norm_type='bn'): - super(NormLayer, self).__init__() - norm_type = norm_type.lower() - self.norm_type = norm_type - if norm_type == 'bn': - self.norm = nn.BatchNorm2d(channels, affine=True) - elif norm_type == 'in': - self.norm = nn.InstanceNorm2d(channels, affine=False) - elif norm_type == 'gn': - self.norm = nn.GroupNorm(32, channels, affine=True) - elif norm_type == 'pixel': - self.norm = lambda x: F.normalize(x, p=2, dim=1) - elif norm_type == 'layer': - self.norm = nn.LayerNorm(normalize_shape) - elif norm_type == 'none': - self.norm = lambda x: x * 1.0 - else: - assert 1 == 0, f'Norm type {norm_type} not support.' - - def forward(self, x, ref=None): - if self.norm_type == 'spade': - return self.norm(x, ref) - else: - return self.norm(x) - - -class ReluLayer(nn.Module): - """Relu Layer. - - Args: - relu type: type of relu layer, candidates are - - ReLU - - LeakyReLU: default relu slope 0.2 - - PRelu - - SELU - - none: direct pass - """ - - def __init__(self, channels, relu_type='relu'): - super(ReluLayer, self).__init__() - relu_type = relu_type.lower() - if relu_type == 'relu': - self.func = nn.ReLU(True) - elif relu_type == 'leakyrelu': - self.func = nn.LeakyReLU(0.2, inplace=True) - elif relu_type == 'prelu': - self.func = nn.PReLU(channels) - elif relu_type == 'selu': - self.func = nn.SELU(True) - elif relu_type == 'none': - self.func = lambda x: x * 1.0 - else: - assert 1 == 0, f'Relu type {relu_type} not support.' - - def forward(self, x): - return self.func(x) - - -class ConvLayer(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - scale='none', - norm_type='none', - relu_type='none', - use_pad=True, - bias=True): - super(ConvLayer, self).__init__() - self.use_pad = use_pad - self.norm_type = norm_type - if norm_type in ['bn']: - bias = False - - stride = 2 if scale == 'down' else 1 - - self.scale_func = lambda x: x - if scale == 'up': - self.scale_func = lambda x: nn.functional.interpolate(x, scale_factor=2, mode='nearest') - - self.reflection_pad = nn.ReflectionPad2d(int(np.ceil((kernel_size - 1.) 
/ 2))) - self.conv2d = nn.Conv2d(in_channels, out_channels, kernel_size, stride, bias=bias) - - self.relu = ReluLayer(out_channels, relu_type) - self.norm = NormLayer(out_channels, norm_type=norm_type) - - def forward(self, x): - out = self.scale_func(x) - if self.use_pad: - out = self.reflection_pad(out) - out = self.conv2d(out) - out = self.norm(out) - out = self.relu(out) - return out - - -class ResidualBlock(nn.Module): - """ - Residual block recommended in: http://torch.ch/blog/2016/02/04/resnets.html - """ - - def __init__(self, c_in, c_out, relu_type='prelu', norm_type='bn', scale='none'): - super(ResidualBlock, self).__init__() - - if scale == 'none' and c_in == c_out: - self.shortcut_func = lambda x: x - else: - self.shortcut_func = ConvLayer(c_in, c_out, 3, scale) - - scale_config_dict = {'down': ['none', 'down'], 'up': ['up', 'none'], 'none': ['none', 'none']} - scale_conf = scale_config_dict[scale] - - self.conv1 = ConvLayer(c_in, c_out, 3, scale_conf[0], norm_type=norm_type, relu_type=relu_type) - self.conv2 = ConvLayer(c_out, c_out, 3, scale_conf[1], norm_type=norm_type, relu_type='none') - - def forward(self, x): - identity = self.shortcut_func(x) - - res = self.conv1(x) - res = self.conv2(res) - return identity + res - - -class ParseNet(nn.Module): - - def __init__(self, - in_size=128, - out_size=128, - min_feat_size=32, - base_ch=64, - parsing_ch=19, - res_depth=10, - relu_type='LeakyReLU', - norm_type='bn', - ch_range=[32, 256]): - super().__init__() - self.res_depth = res_depth - act_args = {'norm_type': norm_type, 'relu_type': relu_type} - min_ch, max_ch = ch_range - - ch_clip = lambda x: max(min_ch, min(x, max_ch)) # noqa: E731 - min_feat_size = min(in_size, min_feat_size) - - down_steps = int(np.log2(in_size // min_feat_size)) - up_steps = int(np.log2(out_size // min_feat_size)) - - # =============== define encoder-body-decoder ==================== - self.encoder = [] - self.encoder.append(ConvLayer(3, base_ch, 3, 1)) - head_ch = base_ch - for i in range(down_steps): - cin, cout = ch_clip(head_ch), ch_clip(head_ch * 2) - self.encoder.append(ResidualBlock(cin, cout, scale='down', **act_args)) - head_ch = head_ch * 2 - - self.body = [] - for i in range(res_depth): - self.body.append(ResidualBlock(ch_clip(head_ch), ch_clip(head_ch), **act_args)) - - self.decoder = [] - for i in range(up_steps): - cin, cout = ch_clip(head_ch), ch_clip(head_ch // 2) - self.decoder.append(ResidualBlock(cin, cout, scale='up', **act_args)) - head_ch = head_ch // 2 - - self.encoder = nn.Sequential(*self.encoder) - self.body = nn.Sequential(*self.body) - self.decoder = nn.Sequential(*self.decoder) - self.out_img_conv = ConvLayer(ch_clip(head_ch), 3) - self.out_mask_conv = ConvLayer(ch_clip(head_ch), parsing_ch) - - def forward(self, x): - feat = self.encoder(x) - x = feat + self.body(feat) - x = self.decoder(x) - out_img = self.out_img_conv(x) - out_mask = self.out_mask_conv(x) - return out_mask, out_img diff --git a/spaces/JoeyFoursheds/ClonerHug/infer_pack/models.py b/spaces/JoeyFoursheds/ClonerHug/infer_pack/models.py deleted file mode 100644 index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000 --- a/spaces/JoeyFoursheds/ClonerHug/infer_pack/models.py +++ /dev/null @@ -1,982 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, 
get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, 
g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - 
harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - 
harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - 
segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - 
sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes 
- self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y_lengths, ds - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - z_slice, ids_slice = commons.rand_slice_segments( - x, y_lengths, self.segment_size - ) - - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice - - def infer( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o, o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, 
use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/JoeyFoursheds/ClonerHug/model_prepare.py b/spaces/JoeyFoursheds/ClonerHug/model_prepare.py deleted file mode 100644 index 78c162a8db541a3b76d27e2d31a39b729a430064..0000000000000000000000000000000000000000 --- a/spaces/JoeyFoursheds/ClonerHug/model_prepare.py +++ /dev/null @@ -1,89 +0,0 @@ -import torch -from fairseq import checkpoint_utils -import json -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) - -def get_model_info(model_name): - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - # For now, get first item - name, info = next(iter(models_info.items())) - name = model_name - # name = "Girl_1" - info = models_info[name] - - title = "Voice Model Image:   " + "GPT-4's Grandmother" - author = info.get("author", None) - image = f"weights/{name}/{info['image']}" - feature_retrieval_index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - checkpoint = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - target_sr = checkpoint["config"][-1] - checkpoint["config"][-3] = checkpoint["weight"]["emb_g.weight"].shape[0] # n_spk - if_fund_freq = checkpoint.get("f0", 1) - if if_fund_freq == 1: - gen_model = SynthesizerTrnMs256NSFsid(*checkpoint["config"], is_half=is_half) - else: - gen_model = SynthesizerTrnMs256NSFsid_nono(*checkpoint["config"]) - del gen_model.enc_q - print(gen_model.load_state_dict(checkpoint["weight"], strict=False)) - gen_model.eval().to(device) - if is_half: - gen_model = gen_model.half() - else: - gen_model = gen_model.float() - - return name, title, author, image, feature_retrieval_index, npy, target_sr, if_fund_freq, is_half, gen_model - -# make a struct to hold all the info returned from get_model_info -class ModelInfo: - def __init__(self, name, title, author, image, feature_retrieval_index, npy, target_sr, if_fund_freq, is_half, gen_model): - self.name = name - self.title = title - self.author = author - self.image = image - self.feature_retrieval_index = feature_retrieval_index - self.npy 
= npy - self.target_sr = target_sr - self.if_fund_freq = if_fund_freq - self.is_half = is_half - self.gen_model = gen_model - -def unpack_model_info(model_info): - return model_info.name, model_info.title, model_info.author, model_info.image, model_info.gen_model - - -def get_all_model_infos(): - model_infos = [] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - #loop through all models in model_info.json - for name, info in models_info.items(): - name, title, author, image, feature_retrieval_index, npy, target_sr, if_fund_freq, is_half, gen_model = get_model_info(model_name=name) - model_infos.append(ModelInfo(name, title, author, image, feature_retrieval_index, npy, target_sr, if_fund_freq, is_half, gen_model)) - # print ("\nname: ", name, "\n") - return model_infos - - - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - return hubert_model - - diff --git a/spaces/JohnC26/MN.Map.Hospitals.Top.Five/app.py b/spaces/JohnC26/MN.Map.Hospitals.Top.Five/app.py deleted file mode 100644 index 884f7d88b148b64d80dd3af93f7dcade28a72611..0000000000000000000000000000000000000000 --- a/spaces/JohnC26/MN.Map.Hospitals.Top.Five/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import streamlit as st -import folium -from streamlit_folium import folium_static - -# Define hospitals data for Minnesota -hospitals = [('Mayo Clinic', 'Rochester', 44.023678, -92.466955), - ('University of Minnesota Medical Center', 'Minneapolis', 44.971389, -93.240556), - ('Hennepin County Medical Center', 'Minneapolis', 44.972078, -93.261769), - ('Regions Hospital', 'St. Paul', 44.942936, -93.093457), - ('Abbott Northwestern Hospital', 'Minneapolis', 44.955447, -93.268543)] - -# Create a map centered on Minnesota -m = folium.Map(location=[45.0, -94.0], zoom_start=7) - -# Add markers for each hospital -for hospital in hospitals: - folium.Marker( - location=[hospital[2], hospital[3]], - popup=f'{hospital[0]}
{hospital[1]}', - icon=folium.Icon(color='red') - ).add_to(m) - -# Add waypoints for each hospital -waypoints = [(hospital[2], hospital[3]) for hospital in hospitals] -folium.plugins.AntPath(waypoints, delay=3000).add_to(m) - -# Function to update the map when a button is clicked -def update_map(hospital_data): - m.location = [hospital_data[2], hospital_data[3]] - m.zoom_start = 13 - folium_static(m) - -# Create a grid of buttons for selecting hospitals -col1, col2, col3 = st.columns(3) -with col1: - if st.button(hospitals[0][0]): - update_map(hospitals[0]) -with col2: - if st.button(hospitals[1][0]): - update_map(hospitals[1]) -with col3: - if st.button(hospitals[2][0]): - update_map(hospitals[2]) - -col4, col5, col6 = st.columns(3) -with col4: - if st.button(hospitals[3][0]): - update_map(hospitals[3]) -with col5: - if st.button(hospitals[4][0]): - update_map(hospitals[4]) - -# Display the map in Streamlit -folium_static(m) diff --git a/spaces/Joom/Front-end-code-generation-from-images/app.py b/spaces/Joom/Front-end-code-generation-from-images/app.py deleted file mode 100644 index 73c5947e01a23996cbfdbc68b5a2fed427721160..0000000000000000000000000000000000000000 --- a/spaces/Joom/Front-end-code-generation-from-images/app.py +++ /dev/null @@ -1,38 +0,0 @@ -__author__ = 'Taneem Jan, taneemishere.github.io' - -import gradio as gr -import main_program - - -# our model's i/o method that take image from gradio interface's inputs.Image() -def model_interface(image): - return main_model(image) - - -# main method that call the main_program where code is generated and then compiled -def main_model(input_image): - result = main_program.main_method(input_image) - return result - - -interface_title = "

 Front-end Code Generation with Deep Neural Networks

" -interface_description = """

 Input a sketch image and select the framework, -then click on submit to generate the code

""" - -interface_article = """

 Crafted with care by Jude

""" - -interface_examples = ['examples/example-1.png', 'examples/example-2.png', 'examples/example-3.png'] - -# a gradio interface to convert a image to HTML Code -interface = gr.Interface( - model_interface, - inputs='image', - outputs='text', - allow_flagging="manual", - title=interface_title, - description=interface_description, - article=interface_article, - examples=interface_examples -) - -interface.launch(share=False) diff --git a/spaces/KevinGeng/Laronix_voice_quality_checking_system_FILEIO/lightning_module.py b/spaces/KevinGeng/Laronix_voice_quality_checking_system_FILEIO/lightning_module.py deleted file mode 100644 index 491426e492accf516713a4a7672b65ec4e831868..0000000000000000000000000000000000000000 --- a/spaces/KevinGeng/Laronix_voice_quality_checking_system_FILEIO/lightning_module.py +++ /dev/null @@ -1,41 +0,0 @@ -import pytorch_lightning as pl -import torch -import torch.nn as nn -import os -import numpy as np -import hydra -from model import load_ssl_model, PhonemeEncoder, DomainEmbedding, LDConditioner, Projection - - -class BaselineLightningModule(pl.LightningModule): - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - self.construct_model() - self.save_hyperparameters() - - def construct_model(self): - self.feature_extractors = nn.ModuleList([ - load_ssl_model(cp_path='wav2vec_small.pt'), - DomainEmbedding(3,128), - ]) - output_dim = sum([ feature_extractor.get_output_dim() for feature_extractor in self.feature_extractors]) - output_layers = [ - LDConditioner(judge_dim=128,num_judges=3000,input_dim=output_dim) - ] - output_dim = output_layers[-1].get_output_dim() - output_layers.append( - Projection(hidden_dim=2048,activation=torch.nn.ReLU(),range_clipping=False,input_dim=output_dim) - - ) - - self.output_layers = nn.ModuleList(output_layers) - - def forward(self, inputs): - outputs = {} - for feature_extractor in self.feature_extractors: - outputs.update(feature_extractor(inputs)) - x = outputs - for output_layer in self.output_layers: - x = output_layer(x,inputs) - return x diff --git a/spaces/Keyurmistry/Joeythemonster-anything-midjourney-v-4-1/app.py b/spaces/Keyurmistry/Joeythemonster-anything-midjourney-v-4-1/app.py deleted file mode 100644 index 262436d8b50f87b0953c645576cc3184b3b27b43..0000000000000000000000000000000000000000 --- a/spaces/Keyurmistry/Joeythemonster-anything-midjourney-v-4-1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Joeythemonster/anything-midjourney-v-4-1").launch() \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/dino.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/dino.py deleted file mode 100644 index a4385462affe70d0d7c7883cf1ce98da30c29036..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/dino.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Dict, Optional, Tuple - -import torch -from torch import Tensor, nn -from torch.nn.init import normal_ - -from mmdet.registry import MODELS -from mmdet.structures import OptSampleList -from mmdet.utils import OptConfigType -from ..layers import (CdnQueryGenerator, DeformableDetrTransformerEncoder, - DinoTransformerDecoder, SinePositionalEncoding) -from .deformable_detr import DeformableDETR, MultiScaleDeformableAttention - - -@MODELS.register_module() -class DINO(DeformableDETR): - r"""Implementation of `DINO: DETR with Improved DeNoising Anchor Boxes - for End-to-End Object Detection `_ - - Code is modified from the `official github repo - `_. - - Args: - dn_cfg (:obj:`ConfigDict` or dict, optional): Config of denoising - query generator. Defaults to `None`. - """ - - def __init__(self, *args, dn_cfg: OptConfigType = None, **kwargs) -> None: - super().__init__(*args, **kwargs) - assert self.as_two_stage, 'as_two_stage must be True for DINO' - assert self.with_box_refine, 'with_box_refine must be True for DINO' - - if dn_cfg is not None: - assert 'num_classes' not in dn_cfg and \ - 'num_queries' not in dn_cfg and \ - 'hidden_dim' not in dn_cfg, \ - 'The three keyword args `num_classes`, `embed_dims`, and ' \ - '`num_matching_queries` are set in `detector.__init__()`, ' \ - 'users should not set them in `dn_cfg` config.' - dn_cfg['num_classes'] = self.bbox_head.num_classes - dn_cfg['embed_dims'] = self.embed_dims - dn_cfg['num_matching_queries'] = self.num_queries - self.dn_query_generator = CdnQueryGenerator(**dn_cfg) - - def _init_layers(self) -> None: - """Initialize layers except for backbone, neck and bbox_head.""" - self.positional_encoding = SinePositionalEncoding( - **self.positional_encoding) - self.encoder = DeformableDetrTransformerEncoder(**self.encoder) - self.decoder = DinoTransformerDecoder(**self.decoder) - self.embed_dims = self.encoder.embed_dims - self.query_embedding = nn.Embedding(self.num_queries, self.embed_dims) - # NOTE In DINO, the query_embedding only contains content - # queries, while in Deformable DETR, the query_embedding - # contains both content and spatial queries, and in DETR, - # it only contains spatial queries. - - num_feats = self.positional_encoding.num_feats - assert num_feats * 2 == self.embed_dims, \ - f'embed_dims should be exactly 2 times of num_feats. ' \ - f'Found {self.embed_dims} and {num_feats}.' - - self.level_embed = nn.Parameter( - torch.Tensor(self.num_feature_levels, self.embed_dims)) - self.memory_trans_fc = nn.Linear(self.embed_dims, self.embed_dims) - self.memory_trans_norm = nn.LayerNorm(self.embed_dims) - - def init_weights(self) -> None: - """Initialize weights for Transformer and other components.""" - super(DeformableDETR, self).init_weights() - for coder in self.encoder, self.decoder: - for p in coder.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MultiScaleDeformableAttention): - m.init_weights() - nn.init.xavier_uniform_(self.memory_trans_fc.weight) - nn.init.xavier_uniform_(self.query_embedding.weight) - normal_(self.level_embed) - - def forward_transformer( - self, - img_feats: Tuple[Tensor], - batch_data_samples: OptSampleList = None, - ) -> Dict: - """Forward process of Transformer. - - The forward procedure of the transformer is defined as: - 'pre_transformer' -> 'encoder' -> 'pre_decoder' -> 'decoder' - More details can be found at `TransformerDetector.forward_transformer` - in `mmdet/detector/base_detr.py`. 
- The difference is that the ground truth in `batch_data_samples` is - required for the `pre_decoder` to prepare the query of DINO. - Additionally, DINO inherits the `pre_transformer` method and the - `forward_encoder` method of DeformableDETR. More details about the - two methods can be found in `mmdet/detector/deformable_detr.py`. - - Args: - img_feats (tuple[Tensor]): Tuple of feature maps from neck. Each - feature map has shape (bs, dim, H, W). - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. - Defaults to None. - - Returns: - dict: The dictionary of bbox_head function inputs, which always - includes the `hidden_states` of the decoder output and may contain - `references` including the initial and intermediate references. - """ - encoder_inputs_dict, decoder_inputs_dict = self.pre_transformer( - img_feats, batch_data_samples) - - encoder_outputs_dict = self.forward_encoder(**encoder_inputs_dict) - - tmp_dec_in, head_inputs_dict = self.pre_decoder( - **encoder_outputs_dict, batch_data_samples=batch_data_samples) - decoder_inputs_dict.update(tmp_dec_in) - - decoder_outputs_dict = self.forward_decoder(**decoder_inputs_dict) - head_inputs_dict.update(decoder_outputs_dict) - return head_inputs_dict - - def pre_decoder( - self, - memory: Tensor, - memory_mask: Tensor, - spatial_shapes: Tensor, - batch_data_samples: OptSampleList = None, - ) -> Tuple[Dict]: - """Prepare intermediate variables before entering Transformer decoder, - such as `query`, `query_pos`, and `reference_points`. - - Args: - memory (Tensor): The output embeddings of the Transformer encoder, - has shape (bs, num_feat_points, dim). - memory_mask (Tensor): ByteTensor, the padding mask of the memory, - has shape (bs, num_feat_points). Will only be used when - `as_two_stage` is `True`. - spatial_shapes (Tensor): Spatial shapes of features in all levels. - With shape (num_levels, 2), last dimension represents (h, w). - Will only be used when `as_two_stage` is `True`. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. - Defaults to None. - - Returns: - tuple[dict]: The decoder_inputs_dict and head_inputs_dict. - - - decoder_inputs_dict (dict): The keyword dictionary args of - `self.forward_decoder()`, which includes 'query', 'memory', - `reference_points`, and `dn_mask`. The reference points of - decoder input here are 4D boxes, although it has `points` - in its name. - - head_inputs_dict (dict): The keyword dictionary args of the - bbox_head functions, which includes `topk_score`, `topk_coords`, - and `dn_meta` when `self.training` is `True`, else is empty. - """ - bs, _, c = memory.shape - cls_out_features = self.bbox_head.cls_branches[ - self.decoder.num_layers].out_features - - output_memory, output_proposals = self.gen_encoder_output_proposals( - memory, memory_mask, spatial_shapes) - enc_outputs_class = self.bbox_head.cls_branches[ - self.decoder.num_layers]( - output_memory) - enc_outputs_coord_unact = self.bbox_head.reg_branches[ - self.decoder.num_layers](output_memory) + output_proposals - - # NOTE The DINO selects top-k proposals according to scores of - # multi-class classification, while DeformDETR, where the input - # is `enc_outputs_class[..., 0]` selects according to scores of - # binary classification. 
- topk_indices = torch.topk( - enc_outputs_class.max(-1)[0], k=self.num_queries, dim=1)[1] - topk_score = torch.gather( - enc_outputs_class, 1, - topk_indices.unsqueeze(-1).repeat(1, 1, cls_out_features)) - topk_coords_unact = torch.gather( - enc_outputs_coord_unact, 1, - topk_indices.unsqueeze(-1).repeat(1, 1, 4)) - topk_coords = topk_coords_unact.sigmoid() - topk_coords_unact = topk_coords_unact.detach() - - query = self.query_embedding.weight[:, None, :] - query = query.repeat(1, bs, 1).transpose(0, 1) - if self.training: - dn_label_query, dn_bbox_query, dn_mask, dn_meta = \ - self.dn_query_generator(batch_data_samples) - query = torch.cat([dn_label_query, query], dim=1) - reference_points = torch.cat([dn_bbox_query, topk_coords_unact], - dim=1) - else: - reference_points = topk_coords_unact - dn_mask, dn_meta = None, None - reference_points = reference_points.sigmoid() - - decoder_inputs_dict = dict( - query=query, - memory=memory, - reference_points=reference_points, - dn_mask=dn_mask) - # NOTE DINO calculates encoder losses on scores and coordinates - # of selected top-k encoder queries, while DeformDETR is of all - # encoder queries. - head_inputs_dict = dict( - enc_outputs_class=topk_score, - enc_outputs_coord=topk_coords, - dn_meta=dn_meta) if self.training else dict() - return decoder_inputs_dict, head_inputs_dict - - def forward_decoder(self, - query: Tensor, - memory: Tensor, - memory_mask: Tensor, - reference_points: Tensor, - spatial_shapes: Tensor, - level_start_index: Tensor, - valid_ratios: Tensor, - dn_mask: Optional[Tensor] = None) -> Dict: - """Forward with Transformer decoder. - - The forward procedure of the transformer is defined as: - 'pre_transformer' -> 'encoder' -> 'pre_decoder' -> 'decoder' - More details can be found at `TransformerDetector.forward_transformer` - in `mmdet/detector/base_detr.py`. - - Args: - query (Tensor): The queries of decoder inputs, has shape - (bs, num_queries_total, dim), where `num_queries_total` is the - sum of `num_denoising_queries` and `num_matching_queries` when - `self.training` is `True`, else `num_matching_queries`. - memory (Tensor): The output embeddings of the Transformer encoder, - has shape (bs, num_feat_points, dim). - memory_mask (Tensor): ByteTensor, the padding mask of the memory, - has shape (bs, num_feat_points). - reference_points (Tensor): The initial reference, has shape - (bs, num_queries_total, 4) with the last dimension arranged as - (cx, cy, w, h). - spatial_shapes (Tensor): Spatial shapes of features in all levels, - has shape (num_levels, 2), last dimension represents (h, w). - level_start_index (Tensor): The start index of each level. - A tensor has shape (num_levels, ) and can be represented - as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...]. - valid_ratios (Tensor): The ratios of the valid width and the valid - height relative to the width and the height of features in all - levels, has shape (bs, num_levels, 2). - dn_mask (Tensor, optional): The attention mask to prevent - information leakage from different denoising groups and - matching parts, will be used as `self_attn_mask` of the - `self.decoder`, has shape (num_queries_total, - num_queries_total). - It is `None` when `self.training` is `False`. - - Returns: - dict: The dictionary of decoder outputs, which includes the - `hidden_states` of the decoder output and `references` including - the initial and intermediate reference_points. 
- """ - inter_states, references = self.decoder( - query=query, - value=memory, - key_padding_mask=memory_mask, - self_attn_mask=dn_mask, - reference_points=reference_points, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - valid_ratios=valid_ratios, - reg_branches=self.bbox_head.reg_branches) - - if len(query) == self.num_queries: - # NOTE: This is to make sure label_embeding can be involved to - # produce loss even if there is no denoising query (no ground truth - # target in this GPU), otherwise, this will raise runtime error in - # distributed training. - inter_states[0] += \ - self.dn_query_generator.label_embedding.weight[0, 0] * 0.0 - - decoder_outputs_dict = dict( - hidden_states=inter_states, references=list(references)) - return decoder_outputs_dict diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/multi_instance_roi_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/multi_instance_roi_head.py deleted file mode 100644 index fee55b0a5d341c03165649f59737fd34d85c207e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/multi_instance_roi_head.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Tuple - -from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures import DetDataSample -from mmdet.structures.bbox import bbox2roi -from mmdet.utils import ConfigType, InstanceList -from ..task_modules.samplers import SamplingResult -from ..utils import empty_instances, unpack_gt_instances -from .standard_roi_head import StandardRoIHead - - -@MODELS.register_module() -class MultiInstanceRoIHead(StandardRoIHead): - """The roi head for Multi-instance prediction.""" - - def __init__(self, num_instance: int = 2, *args, **kwargs) -> None: - self.num_instance = num_instance - super().__init__(*args, **kwargs) - - def init_bbox_head(self, bbox_roi_extractor: ConfigType, - bbox_head: ConfigType) -> None: - """Initialize box head and box roi extractor. - - Args: - bbox_roi_extractor (dict or ConfigDict): Config of box - roi extractor. - bbox_head (dict or ConfigDict): Config of box in box head. - """ - self.bbox_roi_extractor = MODELS.build(bbox_roi_extractor) - self.bbox_head = MODELS.build(bbox_head) - - def _bbox_forward(self, x: Tuple[Tensor], rois: Tensor) -> dict: - """Box head forward function used in both training and testing. - - Args: - x (tuple[Tensor]): List of multi-level img features. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - dict[str, Tensor]: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `cls_score_ref` (Tensor): The cls_score after refine model. - - `bbox_pred_ref` (Tensor): The bbox_pred after refine model. - - `bbox_feats` (Tensor): Extract bbox RoI features. 
- """ - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - bbox_results = self.bbox_head(bbox_feats) - - if self.bbox_head.with_refine: - bbox_results = dict( - cls_score=bbox_results[0], - bbox_pred=bbox_results[1], - cls_score_ref=bbox_results[2], - bbox_pred_ref=bbox_results[3], - bbox_feats=bbox_feats) - else: - bbox_results = dict( - cls_score=bbox_results[0], - bbox_pred=bbox_results[1], - bbox_feats=bbox_feats) - - return bbox_results - - def bbox_loss(self, x: Tuple[Tensor], - sampling_results: List[SamplingResult]) -> dict: - """Perform forward propagation and loss calculation of the bbox head on - the features of the upstream network. - - Args: - x (tuple[Tensor]): List of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - - Returns: - dict[str, Tensor]: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - - `loss_bbox` (dict): A dictionary of bbox loss components. - """ - rois = bbox2roi([res.priors for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - # If there is a refining process, add refine loss. - if 'cls_score_ref' in bbox_results: - bbox_loss_and_target = self.bbox_head.loss_and_target( - cls_score=bbox_results['cls_score'], - bbox_pred=bbox_results['bbox_pred'], - rois=rois, - sampling_results=sampling_results, - rcnn_train_cfg=self.train_cfg) - bbox_results.update(loss_bbox=bbox_loss_and_target['loss_bbox']) - bbox_loss_and_target_ref = self.bbox_head.loss_and_target( - cls_score=bbox_results['cls_score_ref'], - bbox_pred=bbox_results['bbox_pred_ref'], - rois=rois, - sampling_results=sampling_results, - rcnn_train_cfg=self.train_cfg) - bbox_results['loss_bbox']['loss_rcnn_emd_ref'] = \ - bbox_loss_and_target_ref['loss_bbox']['loss_rcnn_emd'] - else: - bbox_loss_and_target = self.bbox_head.loss_and_target( - cls_score=bbox_results['cls_score'], - bbox_pred=bbox_results['bbox_pred'], - rois=rois, - sampling_results=sampling_results, - rcnn_train_cfg=self.train_cfg) - bbox_results.update(loss_bbox=bbox_loss_and_target['loss_bbox']) - - return bbox_results - - def loss(self, x: Tuple[Tensor], rpn_results_list: InstanceList, - batch_data_samples: List[DetDataSample]) -> dict: - """Perform forward propagation and loss calculation of the detection - roi on the features of the upstream network. - - Args: - x (tuple[Tensor]): List of multi-level img features. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components - """ - assert len(rpn_results_list) == len(batch_data_samples) - outputs = unpack_gt_instances(batch_data_samples) - batch_gt_instances, batch_gt_instances_ignore, _ = outputs - - sampling_results = [] - for i in range(len(batch_data_samples)): - # rename rpn_results.bboxes to rpn_results.priors - rpn_results = rpn_results_list[i] - rpn_results.priors = rpn_results.pop('bboxes') - - assign_result = self.bbox_assigner.assign( - rpn_results, batch_gt_instances[i], - batch_gt_instances_ignore[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - rpn_results, - batch_gt_instances[i], - batch_gt_instances_ignore=batch_gt_instances_ignore[i]) - sampling_results.append(sampling_result) - - losses = dict() - # bbox head loss - if self.with_bbox: - bbox_results = self.bbox_loss(x, sampling_results) - losses.update(bbox_results['loss_bbox']) - - return losses - - def predict_bbox(self, - x: Tuple[Tensor], - batch_img_metas: List[dict], - rpn_results_list: InstanceList, - rcnn_test_cfg: ConfigType, - rescale: bool = False) -> InstanceList: - """Perform forward propagation of the bbox head and predict detection - results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - batch_img_metas (list[dict]): List of image information. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - - Returns: - list[:obj:`InstanceData`]: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - proposals = [res.bboxes for res in rpn_results_list] - rois = bbox2roi(proposals) - - if rois.shape[0] == 0: - return empty_instances( - batch_img_metas, rois.device, task_type='bbox') - - bbox_results = self._bbox_forward(x, rois) - - # split batch bbox prediction back to each image - if 'cls_score_ref' in bbox_results: - cls_scores = bbox_results['cls_score_ref'] - bbox_preds = bbox_results['bbox_pred_ref'] - else: - cls_scores = bbox_results['cls_score'] - bbox_preds = bbox_results['bbox_pred'] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = rois.split(num_proposals_per_img, 0) - cls_scores = cls_scores.split(num_proposals_per_img, 0) - - if bbox_preds is not None: - bbox_preds = bbox_preds.split(num_proposals_per_img, 0) - else: - bbox_preds = (None, ) * len(proposals) - - result_list = self.bbox_head.predict_by_feat( - rois=rois, - cls_scores=cls_scores, - bbox_preds=bbox_preds, - batch_img_metas=batch_img_metas, - rcnn_test_cfg=rcnn_test_cfg, - rescale=rescale) - return result_list diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/bucketing_bbox_coder.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/bucketing_bbox_coder.py deleted file mode 100644 index 4044e1cd91d619521606f3c03032a40a9fc27130..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/bucketing_bbox_coder.py +++ /dev/null @@ -1,366 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Optional, Sequence, Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import Tensor - -from mmdet.registry import TASK_UTILS -from mmdet.structures.bbox import (BaseBoxes, HorizontalBoxes, bbox_rescale, - get_box_tensor) -from .base_bbox_coder import BaseBBoxCoder - - -@TASK_UTILS.register_module() -class BucketingBBoxCoder(BaseBBoxCoder): - """Bucketing BBox Coder for Side-Aware Boundary Localization (SABL). - - Boundary Localization with Bucketing and Bucketing Guided Rescoring - are implemented here. - - Please refer to https://arxiv.org/abs/1912.04260 for more details. - - Args: - num_buckets (int): Number of buckets. - scale_factor (int): Scale factor of proposals to generate buckets. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset upperbound to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, - num_buckets: int, - scale_factor: int, - offset_topk: int = 2, - offset_upperbound: float = 1.0, - cls_ignore_neighbor: bool = True, - clip_border: bool = True, - **kwargs) -> None: - super().__init__(**kwargs) - self.num_buckets = num_buckets - self.scale_factor = scale_factor - self.offset_topk = offset_topk - self.offset_upperbound = offset_upperbound - self.cls_ignore_neighbor = cls_ignore_neighbor - self.clip_border = clip_border - - def encode(self, bboxes: Union[Tensor, BaseBoxes], - gt_bboxes: Union[Tensor, BaseBoxes]) -> Tuple[Tensor]: - """Get bucketing estimation and fine regression targets during - training. - - Args: - bboxes (torch.Tensor or :obj:`BaseBoxes`): source boxes, - e.g., object proposals. - gt_bboxes (torch.Tensor or :obj:`BaseBoxes`): target of the - transformation, e.g., ground truth boxes. - - Returns: - encoded_bboxes(tuple[Tensor]): bucketing estimation - and fine regression targets and weights - """ - bboxes = get_box_tensor(bboxes) - gt_bboxes = get_box_tensor(gt_bboxes) - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bbox2bucket(bboxes, gt_bboxes, self.num_buckets, - self.scale_factor, self.offset_topk, - self.offset_upperbound, - self.cls_ignore_neighbor) - return encoded_bboxes - - def decode( - self, - bboxes: Union[Tensor, BaseBoxes], - pred_bboxes: Tensor, - max_shape: Optional[Tuple[int]] = None - ) -> Tuple[Union[Tensor, BaseBoxes], Tensor]: - """Apply transformation `pred_bboxes` to `boxes`. - Args: - boxes (torch.Tensor or :obj:`BaseBoxes`): Basic boxes. - pred_bboxes (torch.Tensor): Predictions for bucketing estimation - and fine regression - max_shape (tuple[int], optional): Maximum shape of boxes. - Defaults to None. - - Returns: - Union[torch.Tensor, :obj:`BaseBoxes`]: Decoded boxes. 
- """ - bboxes = get_box_tensor(bboxes) - assert len(pred_bboxes) == 2 - cls_preds, offset_preds = pred_bboxes - assert cls_preds.size(0) == bboxes.size(0) and offset_preds.size( - 0) == bboxes.size(0) - bboxes, loc_confidence = bucket2bbox(bboxes, cls_preds, offset_preds, - self.num_buckets, - self.scale_factor, max_shape, - self.clip_border) - if self.use_box_type: - bboxes = HorizontalBoxes(bboxes, clone=False) - return bboxes, loc_confidence - - -def generat_buckets(proposals: Tensor, - num_buckets: int, - scale_factor: float = 1.0) -> Tuple[Tensor]: - """Generate buckets w.r.t bucket number and scale factor of proposals. - - Args: - proposals (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - - Returns: - tuple[Tensor]: (bucket_w, bucket_h, l_buckets, r_buckets, - t_buckets, d_buckets) - - - bucket_w: Width of buckets on x-axis. Shape (n, ). - - bucket_h: Height of buckets on y-axis. Shape (n, ). - - l_buckets: Left buckets. Shape (n, ceil(side_num/2)). - - r_buckets: Right buckets. Shape (n, ceil(side_num/2)). - - t_buckets: Top buckets. Shape (n, ceil(side_num/2)). - - d_buckets: Down buckets. Shape (n, ceil(side_num/2)). - """ - proposals = bbox_rescale(proposals, scale_factor) - - # number of buckets in each side - side_num = int(np.ceil(num_buckets / 2.0)) - pw = proposals[..., 2] - proposals[..., 0] - ph = proposals[..., 3] - proposals[..., 1] - px1 = proposals[..., 0] - py1 = proposals[..., 1] - px2 = proposals[..., 2] - py2 = proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - # left buckets - l_buckets = px1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # right buckets - r_buckets = px2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_w[:, None] - # top buckets - t_buckets = py1[:, None] + (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - # down buckets - d_buckets = py2[:, None] - (0.5 + torch.arange( - 0, side_num).to(proposals).float())[None, :] * bucket_h[:, None] - return bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, d_buckets - - -def bbox2bucket(proposals: Tensor, - gt: Tensor, - num_buckets: int, - scale_factor: float, - offset_topk: int = 2, - offset_upperbound: float = 1.0, - cls_ignore_neighbor: bool = True) -> Tuple[Tensor]: - """Generate buckets estimation and fine regression targets. - - Args: - proposals (Tensor): Shape (n, 4) - gt (Tensor): Shape (n, 4) - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - offset_topk (int): Topk buckets are used to generate - bucket fine regression targets. Defaults to 2. - offset_upperbound (float): Offset allowance to generate - bucket fine regression targets. - To avoid too large offset displacements. Defaults to 1.0. - cls_ignore_neighbor (bool): Ignore second nearest bucket or Not. - Defaults to True. - - Returns: - tuple[Tensor]: (offsets, offsets_weights, bucket_labels, cls_weights). - - - offsets: Fine regression targets. \ - Shape (n, num_buckets*2). - - offsets_weights: Fine regression weights. \ - Shape (n, num_buckets*2). - - bucket_labels: Bucketing estimation labels. \ - Shape (n, num_buckets*2). - - cls_weights: Bucketing estimation weights. \ - Shape (n, num_buckets*2). 
- """ - assert proposals.size() == gt.size() - - # generate buckets - proposals = proposals.float() - gt = gt.float() - (bucket_w, bucket_h, l_buckets, r_buckets, t_buckets, - d_buckets) = generat_buckets(proposals, num_buckets, scale_factor) - - gx1 = gt[..., 0] - gy1 = gt[..., 1] - gx2 = gt[..., 2] - gy2 = gt[..., 3] - - # generate offset targets and weights - # offsets from buckets to gts - l_offsets = (l_buckets - gx1[:, None]) / bucket_w[:, None] - r_offsets = (r_buckets - gx2[:, None]) / bucket_w[:, None] - t_offsets = (t_buckets - gy1[:, None]) / bucket_h[:, None] - d_offsets = (d_buckets - gy2[:, None]) / bucket_h[:, None] - - # select top-k nearest buckets - l_topk, l_label = l_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - r_topk, r_label = r_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - t_topk, t_label = t_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - d_topk, d_label = d_offsets.abs().topk( - offset_topk, dim=1, largest=False, sorted=True) - - offset_l_weights = l_offsets.new_zeros(l_offsets.size()) - offset_r_weights = r_offsets.new_zeros(r_offsets.size()) - offset_t_weights = t_offsets.new_zeros(t_offsets.size()) - offset_d_weights = d_offsets.new_zeros(d_offsets.size()) - inds = torch.arange(0, proposals.size(0)).to(proposals).long() - - # generate offset weights of top-k nearest buckets - for k in range(offset_topk): - if k >= 1: - offset_l_weights[inds, l_label[:, - k]] = (l_topk[:, k] < - offset_upperbound).float() - offset_r_weights[inds, r_label[:, - k]] = (r_topk[:, k] < - offset_upperbound).float() - offset_t_weights[inds, t_label[:, - k]] = (t_topk[:, k] < - offset_upperbound).float() - offset_d_weights[inds, d_label[:, - k]] = (d_topk[:, k] < - offset_upperbound).float() - else: - offset_l_weights[inds, l_label[:, k]] = 1.0 - offset_r_weights[inds, r_label[:, k]] = 1.0 - offset_t_weights[inds, t_label[:, k]] = 1.0 - offset_d_weights[inds, d_label[:, k]] = 1.0 - - offsets = torch.cat([l_offsets, r_offsets, t_offsets, d_offsets], dim=-1) - offsets_weights = torch.cat([ - offset_l_weights, offset_r_weights, offset_t_weights, offset_d_weights - ], - dim=-1) - - # generate bucket labels and weight - side_num = int(np.ceil(num_buckets / 2.0)) - labels = torch.stack( - [l_label[:, 0], r_label[:, 0], t_label[:, 0], d_label[:, 0]], dim=-1) - - batch_size = labels.size(0) - bucket_labels = F.one_hot(labels.view(-1), side_num).view(batch_size, - -1).float() - bucket_cls_l_weights = (l_offsets.abs() < 1).float() - bucket_cls_r_weights = (r_offsets.abs() < 1).float() - bucket_cls_t_weights = (t_offsets.abs() < 1).float() - bucket_cls_d_weights = (d_offsets.abs() < 1).float() - bucket_cls_weights = torch.cat([ - bucket_cls_l_weights, bucket_cls_r_weights, bucket_cls_t_weights, - bucket_cls_d_weights - ], - dim=-1) - # ignore second nearest buckets for cls if necessary - if cls_ignore_neighbor: - bucket_cls_weights = (~((bucket_cls_weights == 1) & - (bucket_labels == 0))).float() - else: - bucket_cls_weights[:] = 1.0 - return offsets, offsets_weights, bucket_labels, bucket_cls_weights - - -def bucket2bbox(proposals: Tensor, - cls_preds: Tensor, - offset_preds: Tensor, - num_buckets: int, - scale_factor: float = 1.0, - max_shape: Optional[Union[Sequence[int], Tensor, - Sequence[Sequence[int]]]] = None, - clip_border: bool = True) -> Tuple[Tensor]: - """Apply bucketing estimation (cls preds) and fine regression (offset - preds) to generate det bboxes. - - Args: - proposals (Tensor): Boxes to be transformed. 
Shape (n, 4) - cls_preds (Tensor): bucketing estimation. Shape (n, num_buckets*2). - offset_preds (Tensor): fine regression. Shape (n, num_buckets*2). - num_buckets (int): Number of buckets. - scale_factor (float): Scale factor to rescale proposals. - max_shape (tuple[int, int]): Maximum bounds for boxes. specifies (H, W) - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - - Returns: - tuple[Tensor]: (bboxes, loc_confidence). - - - bboxes: predicted bboxes. Shape (n, 4) - - loc_confidence: localization confidence of predicted bboxes. - Shape (n,). - """ - - side_num = int(np.ceil(num_buckets / 2.0)) - cls_preds = cls_preds.view(-1, side_num) - offset_preds = offset_preds.view(-1, side_num) - - scores = F.softmax(cls_preds, dim=1) - score_topk, score_label = scores.topk(2, dim=1, largest=True, sorted=True) - - rescaled_proposals = bbox_rescale(proposals, scale_factor) - - pw = rescaled_proposals[..., 2] - rescaled_proposals[..., 0] - ph = rescaled_proposals[..., 3] - rescaled_proposals[..., 1] - px1 = rescaled_proposals[..., 0] - py1 = rescaled_proposals[..., 1] - px2 = rescaled_proposals[..., 2] - py2 = rescaled_proposals[..., 3] - - bucket_w = pw / num_buckets - bucket_h = ph / num_buckets - - score_inds_l = score_label[0::4, 0] - score_inds_r = score_label[1::4, 0] - score_inds_t = score_label[2::4, 0] - score_inds_d = score_label[3::4, 0] - l_buckets = px1 + (0.5 + score_inds_l.float()) * bucket_w - r_buckets = px2 - (0.5 + score_inds_r.float()) * bucket_w - t_buckets = py1 + (0.5 + score_inds_t.float()) * bucket_h - d_buckets = py2 - (0.5 + score_inds_d.float()) * bucket_h - - offsets = offset_preds.view(-1, 4, side_num) - inds = torch.arange(proposals.size(0)).to(proposals).long() - l_offsets = offsets[:, 0, :][inds, score_inds_l] - r_offsets = offsets[:, 1, :][inds, score_inds_r] - t_offsets = offsets[:, 2, :][inds, score_inds_t] - d_offsets = offsets[:, 3, :][inds, score_inds_d] - - x1 = l_buckets - l_offsets * bucket_w - x2 = r_buckets - r_offsets * bucket_w - y1 = t_buckets - t_offsets * bucket_h - y2 = d_buckets - d_offsets * bucket_h - - if clip_border and max_shape is not None: - x1 = x1.clamp(min=0, max=max_shape[1] - 1) - y1 = y1.clamp(min=0, max=max_shape[0] - 1) - x2 = x2.clamp(min=0, max=max_shape[1] - 1) - y2 = y2.clamp(min=0, max=max_shape[0] - 1) - bboxes = torch.cat([x1[:, None], y1[:, None], x2[:, None], y2[:, None]], - dim=-1) - - # bucketing guided rescoring - loc_confidence = score_topk[:, 0] - top2_neighbor_inds = (score_label[:, 0] - score_label[:, 1]).abs() == 1 - loc_confidence += score_topk[:, 1] * top2_neighbor_inds.float() - loc_confidence = loc_confidence.view(-1, 4).mean(dim=1) - - return bboxes, loc_confidence diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/spec_utils.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/spec_utils.py deleted file mode 100644 index 062d9050d85c036f8ebafc9c64f1501cff747568..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/spec_utils.py +++ /dev/null @@ -1,666 +0,0 @@ -import os, librosa -import numpy as np -import soundfile as sf -from tqdm import tqdm -import json, math, hashlib - - -def crop_center(h1, h2): - h1_shape = h1.size() - h2_shape = h2.size() - - if h1_shape[3] == h2_shape[3]: - return h1 - elif h1_shape[3] < h2_shape[3]: - raise ValueError("h1_shape[3] must be greater than h2_shape[3]") - - # s_freq = (h2_shape[2] - h1_shape[2]) // 2 - # e_freq 
= s_freq + h1_shape[2] - s_time = (h1_shape[3] - h2_shape[3]) // 2 - e_time = s_time + h2_shape[3] - h1 = h1[:, :, :, s_time:e_time] - - return h1 - - -def wave_to_spectrogram( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length) - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def wave_to_spectrogram_mt( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - import threading - - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - def run_thread(**kwargs): - global spec_left - spec_left = librosa.stft(**kwargs) - - thread = threading.Thread( - target=run_thread, - kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length}, - ) - thread.start() - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - thread.join() - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def combine_spectrograms(specs, mp): - l = min([specs[i].shape[2] for i in specs]) - spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64) - offset = 0 - bands_n = len(mp.param["band"]) - - for d in range(1, bands_n + 1): - h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"] - spec_c[:, offset : offset + h, :l] = specs[d][ - :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l - ] - offset += h - - if offset > mp.param["bins"]: - raise ValueError("Too much bins") - - # lowpass fiter - if ( - mp.param["pre_filter_start"] > 0 - ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']: - if bands_n == 1: - spec_c = fft_lp_filter( - spec_c, mp.param["pre_filter_start"], mp.param["pre_filter_stop"] - ) - else: - gp = 1 - for b in range( - mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"] - ): - g = math.pow( - 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0 - ) - gp = g - spec_c[:, b, :] *= g - - return np.asfortranarray(spec_c) - - -def spectrogram_to_image(spec, mode="magnitude"): - if mode == "magnitude": - if np.iscomplexobj(spec): - y = np.abs(spec) - else: - y = spec - y = np.log10(y**2 + 1e-8) - elif mode == "phase": - if np.iscomplexobj(spec): - y = np.angle(spec) - else: - y = spec - - y -= y.min() - y *= 255 / y.max() - img = np.uint8(y) - - if y.ndim == 3: - img = img.transpose(1, 2, 0) - img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2) - - return img - - -def reduce_vocal_aggressively(X, y, softmask): 
- v = X - y - y_mag_tmp = np.abs(y) - v_mag_tmp = np.abs(v) - - v_mask = v_mag_tmp > y_mag_tmp - y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf) - - return y_mag * np.exp(1.0j * np.angle(y)) - - -def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32): - if min_range < fade_size * 2: - raise ValueError("min_range must be >= fade_area * 2") - - mag = mag.copy() - - idx = np.where(ref.mean(axis=(0, 1)) < thres)[0] - starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0]) - ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1]) - uninformative = np.where(ends - starts > min_range)[0] - if len(uninformative) > 0: - starts = starts[uninformative] - ends = ends[uninformative] - old_e = None - for s, e in zip(starts, ends): - if old_e is not None and s - old_e < fade_size: - s = old_e - fade_size * 2 - - if s != 0: - weight = np.linspace(0, 1, fade_size) - mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size] - else: - s -= fade_size - - if e != mag.shape[2]: - weight = np.linspace(1, 0, fade_size) - mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e] - else: - e += fade_size - - mag[:, :, s + fade_size : e - fade_size] += ref[ - :, :, s + fade_size : e - fade_size - ] - old_e = e - - return mag - - -def align_wave_head_and_tail(a, b): - l = min([a[0].size, b[0].size]) - - return a[:l, :l], b[:l, :l] - - -def cache_or_load(mix_path, inst_path, mp): - mix_basename = os.path.splitext(os.path.basename(mix_path))[0] - inst_basename = os.path.splitext(os.path.basename(inst_path))[0] - - cache_dir = "mph{}".format( - hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest() - ) - mix_cache_dir = os.path.join("cache", cache_dir) - inst_cache_dir = os.path.join("cache", cache_dir) - - os.makedirs(mix_cache_dir, exist_ok=True) - os.makedirs(inst_cache_dir, exist_ok=True) - - mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy") - inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy") - - if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path): - X_spec_m = np.load(mix_cache_path) - y_spec_m = np.load(inst_cache_path) - else: - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - X_wave[d], _ = librosa.load( - mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"] - ) - y_wave[d], _ = librosa.load( - inst_path, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - else: # lower bands - X_wave[d] = librosa.resample( - X_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - y_wave[d] = librosa.resample( - y_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d]) - - X_spec_s[d] = wave_to_spectrogram( - X_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - y_spec_s[d] = wave_to_spectrogram( - y_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - del X_wave, y_wave - - X_spec_m = combine_spectrograms(X_spec_s, mp) - y_spec_m = combine_spectrograms(y_spec_s, mp) - - if X_spec_m.shape != y_spec_m.shape: - raise ValueError("The combined spectrograms are different: " + mix_path) - - _, ext = os.path.splitext(mix_path) - - 
np.save(mix_cache_path, X_spec_m) - np.save(inst_cache_path, y_spec_m) - - return X_spec_m, y_spec_m - - -def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hop_length) - wave_right = librosa.istft(spec_right, hop_length=hop_length) - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2): - import threading - - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - def run_thread(**kwargs): - global wave_left - wave_left = librosa.istft(**kwargs) - - thread = threading.Thread( - target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length} - ) - thread.start() - wave_right = librosa.istft(spec_right, hop_length=hop_length) - thread.join() - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None): - wave_band = {} - bands_n = len(mp.param["band"]) - offset = 0 - - for d in range(1, bands_n + 1): - bp = mp.param["band"][d] - spec_s = np.ndarray( - shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex - ) - h = bp["crop_stop"] - bp["crop_start"] - spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[ - :, offset : offset + h, : - ] - - offset += h - if d == bands_n: # higher - if extra_bins_h: # if --high_end_process bypass - max_bin = bp["n_fft"] // 2 - spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[ - :, :extra_bins_h, : - ] - if bp["hpf_start"] > 0: - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - if bands_n == 1: - wave = spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - else: - wave = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - else: - sr = mp.param["band"][d + 1]["sr"] - if d == 1: # lower - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave = librosa.resample( - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - bp["sr"], - sr, - res_type="sinc_fastest", - ) - else: # mid - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave2 = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest") - wave = librosa.core.resample(wave2, 
bp["sr"], sr, res_type="scipy") - - return wave.T - - -def fft_lp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop): - g -= 1 / (bin_stop - bin_start) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, bin_stop:, :] *= 0 - - return spec - - -def fft_hp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop, -1): - g -= 1 / (bin_start - bin_stop) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, 0 : bin_stop + 1, :] *= 0 - - return spec - - -def mirroring(a, spec_m, input_high_end, mp): - if "mirroring" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mirror = mirror * np.exp(1.0j * np.angle(input_high_end)) - - return np.where( - np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror - ) - - if "mirroring2" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mi = np.multiply(mirror, input_high_end * 1.7) - - return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi) - - -def ensembling(a, specs): - for i in range(1, len(specs)): - if i == 1: - spec = specs[0] - - ln = min([spec.shape[2], specs[i].shape[2]]) - spec = spec[:, :, :ln] - specs[i] = specs[i][:, :, :ln] - - if "min_mag" == a: - spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec) - if "max_mag" == a: - spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec) - - return spec - - -def stft(wave, nfft, hl): - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - spec_left = librosa.stft(wave_left, nfft, hop_length=hl) - spec_right = librosa.stft(wave_right, nfft, hop_length=hl) - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def istft(spec, hl): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hl) - wave_right = librosa.istft(spec_right, hop_length=hl) - wave = np.asfortranarray([wave_left, wave_right]) - - -if __name__ == "__main__": - import cv2 - import time - import argparse - from model_param_init import ModelParameters - - p = argparse.ArgumentParser() - p.add_argument( - "--algorithm", - "-a", - type=str, - choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"], - default="min_mag", - ) - p.add_argument( - "--model_params", - "-m", - type=str, - default=os.path.join("modelparams", "1band_sr44100_hl512.json"), - ) - p.add_argument("--output_name", "-o", type=str, default="output") - p.add_argument("--vocals_only", "-v", action="store_true") - p.add_argument("input", nargs="+") - args = p.parse_args() - - start_time = time.time() - - if args.algorithm.startswith("invert") and len(args.input) != 2: - raise ValueError("There should be two input files.") - - if not args.algorithm.startswith("invert") and len(args.input) < 2: - raise ValueError("There must be at least two input files.") - - wave, specs = {}, {} - mp = ModelParameters(args.model_params) - - for i in range(len(args.input)): - spec = {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - wave[d], _ = librosa.load( - args.input[i], - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - - if len(wave[d].shape) == 1: # mono to stereo - wave[d] = np.array([wave[d], 
wave[d]]) - else: # lower bands - wave[d] = librosa.resample( - wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - spec[d] = wave_to_spectrogram( - wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - specs[i] = combine_spectrograms(spec, mp) - - del wave - - if args.algorithm == "deep": - d_spec = np.where(np.abs(specs[0]) <= np.abs(spec[1]), specs[0], spec[1]) - v_spec = d_spec - specs[1] - sf.write( - os.path.join("{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - - if args.algorithm.startswith("invert"): - ln = min([specs[0].shape[2], specs[1].shape[2]]) - specs[0] = specs[0][:, :, :ln] - specs[1] = specs[1][:, :, :ln] - - if "invert_p" == args.algorithm: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - max_mag = np.where(X_mag >= y_mag, X_mag, y_mag) - v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0])) - else: - specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2) - v_spec = specs[0] - specs[1] - - if not args.vocals_only: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - v_mag = np.abs(v_spec) - - X_image = spectrogram_to_image(X_mag) - y_image = spectrogram_to_image(y_mag) - v_image = spectrogram_to_image(v_mag) - - cv2.imwrite("{}_X.png".format(args.output_name), X_image) - cv2.imwrite("{}_y.png".format(args.output_name), y_image) - cv2.imwrite("{}_v.png".format(args.output_name), v_image) - - sf.write( - "{}_X.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[0], mp), - mp.param["sr"], - ) - sf.write( - "{}_y.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[1], mp), - mp.param["sr"], - ) - - sf.write( - "{}_v.wav".format(args.output_name), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - else: - if not args.algorithm == "deep": - sf.write( - os.path.join("ensembled", "{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp), - mp.param["sr"], - ) - - if args.algorithm == "align": - trackalignment = [ - { - "file1": '"{}"'.format(args.input[0]), - "file2": '"{}"'.format(args.input[1]), - } - ] - - for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."): - os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}") - - # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1)) diff --git a/spaces/LightChen2333/OpenSLU/common/config.py b/spaces/LightChen2333/OpenSLU/common/config.py deleted file mode 100644 index 9563ea6ffa6a75095e61a872db5b4fcd6f2e9d65..0000000000000000000000000000000000000000 --- a/spaces/LightChen2333/OpenSLU/common/config.py +++ /dev/null @@ -1,192 +0,0 @@ -''' -Author: Qiguang Chen -Date: 2023-01-11 10:39:26 -LastEditors: Qiguang Chen -LastEditTime: 2023-02-15 17:58:53 -Description: Configuration class to manage all process in OpenSLU like model construction, learning processing and so on. - -''' -import re - -from ruamel import yaml -import datetime - -class Config(dict): - def __init__(self, *args, **kwargs): - """ init with dict as args - """ - dict.__init__(self, *args, **kwargs) - self.__dict__ = self - self.start_time = datetime.datetime.now().strftime('%Y%m%d%H%M%S%f') - if not self.model.get("_from_pretrained_"): - self.__autowired() - - @staticmethod - def load_from_yaml(file_path:str)->"Config": - """load config files with path - - Args: - file_path (str): yaml configuration file path. - - Returns: - Config: config object. 
- """ - with open(file_path) as stream: - try: - return Config(yaml.safe_load(stream)) - except yaml.YAMLError as exc: - print(exc) - - @staticmethod - def load_from_args(args)->"Config": - """ load args to replace item value in config files assigned with '--config_path' or '--model' - - Args: - args (Any): args with command line. - - Returns: - Config: _description_ - """ - if args.model is not None and args.dataset is not None: - args.config_path = f"config/reproduction/{args.dataset}/{args.model}.yaml" - config = Config.load_from_yaml(args.config_path) - if args.dataset is not None: - config.__update_dataset(args.dataset) - if args.device is not None: - config["base"]["device"] = args.device - if args.learning_rate is not None: - config["optimizer"]["lr"] = args.learning_rate - if args.epoch_num is not None: - config["base"]["epoch_num"] = args.epoch_num - return config - - def autoload_template(self): - """ search '{*}' template to excute as python code, support replace variable as any configure item - """ - self.__autoload_template(self.__dict__) - - def __get_autoload_value(self, matched): - keys = matched.group()[1:-1].split(".") - temp = self.__dict__ - for k in keys: - temp = temp[k] - return str(temp) - - def __autoload_template(self, config:dict): - for k in config: - if isinstance(config, dict): - sub_config = config[k] - elif isinstance(config, list): - sub_config = k - else: - continue - if isinstance(sub_config, dict) or isinstance(sub_config, list): - self.__autoload_template(sub_config) - if isinstance(sub_config, str) and "{" in sub_config and "}" in sub_config: - res = re.sub(r'{.*?}', self.__get_autoload_value, config[k]) - res_dict= {"res": None} - exec("res=" + res, res_dict) - config[k] = res_dict["res"] - - def __update_dataset(self, dataset_name): - if dataset_name is not None and isinstance(dataset_name, str): - self.__dict__["dataset"]["dataset_name"] = dataset_name - - def get_model_config(self): - return self.__dict__["model"] - - def __autowired(self): - # Set encoder - encoder_config = self.__dict__["model"]["encoder"] - encoder_type = encoder_config["_model_target_"].split(".")[-1] - - def get_output_dim(encoder_config): - encoder_type = encoder_config["_model_target_"].split(".")[-1] - if (encoder_type == "AutoEncoder" and encoder_config["encoder_name"] in ["lstm", "self-attention-lstm", - "bi-encoder"]) or encoder_type == "NoPretrainedEncoder": - output_dim = 0 - if encoder_config.get("lstm"): - output_dim += encoder_config["lstm"]["output_dim"] - if encoder_config.get("attention"): - output_dim += encoder_config["attention"]["output_dim"] - return output_dim - else: - return encoder_config["output_dim"] - - if encoder_type == "BiEncoder": - output_dim = get_output_dim(encoder_config["intent_encoder"]) + \ - get_output_dim(encoder_config["slot_encoder"]) - else: - output_dim = get_output_dim(encoder_config) - self.__dict__["model"]["encoder"]["output_dim"] = output_dim - - # Set interaction - if "interaction" in self.__dict__["model"]["decoder"] and self.__dict__["model"]["decoder"]["interaction"].get( - "input_dim") is None: - self.__dict__["model"]["decoder"]["interaction"]["input_dim"] = output_dim - interaction_type = self.__dict__["model"]["decoder"]["interaction"]["_model_target_"].split(".")[-1] - if not ((encoder_type == "AutoEncoder" and encoder_config[ - "encoder_name"] == "self-attention-lstm") or encoder_type == "SelfAttentionLSTMEncoder") and interaction_type != "BiModelWithoutDecoderInteraction": - output_dim = 
self.__dict__["model"]["decoder"]["interaction"]["output_dim"] - - # Set classifier - if "slot_classifier" in self.__dict__["model"]["decoder"]: - if self.__dict__["model"]["decoder"]["slot_classifier"].get("input_dim") is None: - self.__dict__["model"]["decoder"]["slot_classifier"]["input_dim"] = output_dim - self.__dict__["model"]["decoder"]["slot_classifier"]["use_slot"] = True - if "intent_classifier" in self.__dict__["model"]["decoder"]: - if self.__dict__["model"]["decoder"]["intent_classifier"].get("input_dim") is None: - self.__dict__["model"]["decoder"]["intent_classifier"]["input_dim"] = output_dim - self.__dict__["model"]["decoder"]["intent_classifier"]["use_intent"] = True - - def get_intent_label_num(self): - """ get the number of intent labels. - """ - classifier_conf = self.__dict__["model"]["decoder"]["intent_classifier"] - return classifier_conf["intent_label_num"] if "intent_label_num" in classifier_conf else 0 - - def get_slot_label_num(self): - """ get the number of slot labels. - """ - classifier_conf = self.__dict__["model"]["decoder"]["slot_classifier"] - return classifier_conf["slot_label_num"] if "slot_label_num" in classifier_conf else 0 - - def set_intent_label_num(self, intent_label_num): - """ set the number of intent labels. - - Args: - slot_label_num (int): the number of intent label - """ - self.__dict__["base"]["intent_label_num"] = intent_label_num - self.__dict__["model"]["decoder"]["intent_classifier"]["intent_label_num"] = intent_label_num - if "interaction" in self.__dict__["model"]["decoder"]: - - self.__dict__["model"]["decoder"]["interaction"]["intent_label_num"] = intent_label_num - if self.__dict__["model"]["decoder"]["interaction"]["_model_target_"].split(".")[ - -1] == "StackInteraction": - self.__dict__["model"]["decoder"]["slot_classifier"]["input_dim"] += intent_label_num - - - def set_slot_label_num(self, slot_label_num:int)->None: - """set the number of slot label - - Args: - slot_label_num (int): the number of slot label - """ - self.__dict__["base"]["slot_label_num"] = slot_label_num - self.__dict__["model"]["decoder"]["slot_classifier"]["slot_label_num"] = slot_label_num - if "interaction" in self.__dict__["model"]["decoder"]: - self.__dict__["model"]["decoder"]["interaction"]["slot_label_num"] = slot_label_num - - def set_vocab_size(self, vocab_size): - """set the size of vocabulary in non-pretrained tokenizer - Args: - slot_label_num (int): the number of slot label - """ - encoder_type = self.__dict__["model"]["encoder"]["_model_target_"].split(".")[-1] - encoder_name = self.__dict__["model"]["encoder"].get("encoder_name") - if encoder_type == "BiEncoder" or (encoder_type == "AutoEncoder" and encoder_name == "bi-encoder"): - self.__dict__["model"]["encoder"]["intent_encoder"]["embedding"]["vocab_size"] = vocab_size - self.__dict__["model"]["encoder"]["slot_encoder"]["embedding"]["vocab_size"] = vocab_size - elif self.__dict__["model"]["encoder"].get("embedding"): - self.__dict__["model"]["encoder"]["embedding"]["vocab_size"] = vocab_size diff --git a/spaces/LittleYuan/My-Real-Bot/inference_realesrgan.py b/spaces/LittleYuan/My-Real-Bot/inference_realesrgan.py deleted file mode 100644 index 6d5ff4d188faaa16c0131be69a08fd22fb608f80..0000000000000000000000000000000000000000 --- a/spaces/LittleYuan/My-Real-Bot/inference_realesrgan.py +++ /dev/null @@ -1,128 +0,0 @@ -import argparse -import cv2 -import glob -import os -from basicsr.archs.rrdbnet_arch import RRDBNet - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch 
import SRVGGNetCompact - - -def main(): - """Inference demo for Real-ESRGAN. - """ - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder') - parser.add_argument( - '-n', - '--model_name', - type=str, - default='RealESRGAN_x4plus', - help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus' - 'RealESRGANv2-anime-xsx2 | RealESRGANv2-animevideo-xsx2-nousm | RealESRGANv2-animevideo-xsx2' - 'RealESRGANv2-anime-xsx4 | RealESRGANv2-animevideo-xsx4-nousm | RealESRGANv2-animevideo-xsx4')) - parser.add_argument('-o', '--output', type=str, default='results', help='Output folder') - parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image') - parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image') - parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing') - parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding') - parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border') - parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face') - parser.add_argument('--half', action='store_true', help='Use half precision during inference') - parser.add_argument( - '--alpha_upsampler', - type=str, - default='realesrgan', - help='The upsampler for the alpha channels. Options: realesrgan | bicubic') - parser.add_argument( - '--ext', - type=str, - default='auto', - help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs') - args = parser.parse_args() - - # determine models according to model names - args.model_name = args.model_name.split('.')[0] - if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']: # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif args.model_name in ['RealESRGAN_x4plus_anime_6B']: # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - elif args.model_name in ['RealESRGAN_x2plus']: # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - elif args.model_name in [ - 'RealESRGANv2-anime-xsx2', 'RealESRGANv2-animevideo-xsx2-nousm', 'RealESRGANv2-animevideo-xsx2' - ]: # x2 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=2, act_type='prelu') - netscale = 2 - elif args.model_name in [ - 'RealESRGANv2-anime-xsx4', 'RealESRGANv2-animevideo-xsx4-nousm', 'RealESRGANv2-animevideo-xsx4' - ]: # x4 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - - # determine model paths - model_path = os.path.join('.', args.model_name + '.pth') - if not os.path.isfile(model_path): - model_path = os.path.join('.', args.model_name + '.pth') - if not os.path.isfile(model_path): - raise ValueError(f'Model {args.model_name} does not exist.') - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=args.half) - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - 
face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth', - upscale=args.outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - os.makedirs(args.output, exist_ok=True) - - if os.path.isfile(args.input): - paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, '*'))) - - for idx, path in enumerate(paths): - imgname, extension = os.path.splitext(os.path.basename(path)) - print('Testing', idx, imgname) - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - else: - img_mode = None - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - if args.ext == 'auto': - extension = extension[1:] - else: - extension = args.ext - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}') - cv2.imwrite(save_path, output) - - -if __name__ == '__main__': - main() diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" deleted file mode 100644 index 46275bf4fed59ca5692581ac9b354c4d4ad91d7c..0000000000000000000000000000000000000000 --- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" +++ /dev/null @@ -1,194 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file, get_conf -import re, requests, unicodedata, os -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -def download_arxiv_(url_pdf): - if 'arxiv.org' not in url_pdf: - if ('.' 
in url_pdf) and ('/' not in url_pdf): - new_url = 'https://arxiv.org/abs/'+url_pdf - print('下载编号:', url_pdf, '自动定位:', new_url) - # download_arxiv_(new_url) - return download_arxiv_(new_url) - else: - print('不能识别的URL!') - return None - if 'abs' in url_pdf: - url_pdf = url_pdf.replace('abs', 'pdf') - url_pdf = url_pdf + '.pdf' - - url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs') - title, other_info = get_name(_url_=url_abs) - - paper_id = title.split()[0] # '[1712.00559]' - if '2' in other_info['year']: - title = other_info['year'] + ' ' + title - - known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI'] - for k in known_conf: - if k in other_info['comment']: - title = k + ' ' + title - - download_dir = './gpt_log/arxiv/' - os.makedirs(download_dir, exist_ok=True) - - title_str = title.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - - requests_pdf_url = url_pdf - file_path = download_dir+title_str - # if os.path.exists(file_path): - # print('返回缓存文件') - # return './gpt_log/arxiv/'+title_str - - print('下载中') - proxies, = get_conf('proxies') - r = requests.get(requests_pdf_url, proxies=proxies) - with open(file_path, 'wb+') as f: - f.write(r.content) - print('下载完成') - - # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf)) - # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True) - - x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors']) - x = x.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - return './gpt_log/arxiv/'+title_str, other_info - - -def get_name(_url_): - import os - from bs4 import BeautifulSoup - print('正在获取文献名!') - print(_url_) - - # arxiv_recall = {} - # if os.path.exists('./arxiv_recall.pkl'): - # with open('./arxiv_recall.pkl', 'rb') as f: - # arxiv_recall = pickle.load(f) - - # if _url_ in arxiv_recall: - # print('在缓存中') - # return arxiv_recall[_url_] - - proxies, = get_conf('proxies') - res = requests.get(_url_, proxies=proxies) - - bs = BeautifulSoup(res.text, 'html.parser') - other_details = {} - - # get year - try: - year = bs.find_all(class_='dateline')[0].text - year = re.search(r'(\d{4})', year, re.M | re.I).group(1) - other_details['year'] = year - abstract = bs.find_all(class_='abstract mathjax')[0].text - other_details['abstract'] = abstract - except: - other_details['year'] = '' - print('年份获取失败') - - # get author - try: - authors = bs.find_all(class_='authors')[0].text - authors = authors.split('Authors:')[1] - other_details['authors'] = authors - except: - other_details['authors'] = '' - print('authors获取失败') - - # get comment - try: - comment = bs.find_all(class_='metatable')[0].text - real_comment = None - for item in comment.replace('\n', ' ').split(' '): - if 'Comments' in item: - real_comment = item - if real_comment is not None: - other_details['comment'] = real_comment - else: - other_details['comment'] = '' - except: - other_details['comment'] = '' - print('年份获取失败') - - title_str = BeautifulSoup( - res.text, 'html.parser').find('title').contents[0] - print('获取成功:', title_str) - # arxiv_recall[_url_] = (title_str+'.pdf', other_details) - # with open('./arxiv_recall.pkl', 'wb') as f: - # pickle.dump(arxiv_recall, f) - - return title_str+'.pdf', other_details - - - -@CatchException -def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - - 
CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……" - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 提取摘要,下载PDF文档 - try: - pdf_path, info = download_arxiv_(txt) - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"下载pdf文件未成功") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 翻译摘要等 - i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}" - i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - msg = '正常' - # ** gpt request ** - # 单线,获取文章meta信息 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="Your job is to collect information from materials and translate to Chinese。", - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - # 写入文件 - import shutil - # 重置文件的创建时间 - shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path) - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载")) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/models/loaders.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/models/loaders.py deleted file mode 100644 index 97c662c3212b7695669cbfc5214ff2f099c3f319..0000000000000000000000000000000000000000 --- a/spaces/LucasCodeBreak/MusicGen/audiocraft/models/loaders.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility functions to load from the checkpoints. -Each checkpoint is a torch.saved dict with the following keys: -- 'xp.cfg': the hydra config as dumped during training. This should be used - to rebuild the object using the audiocraft.models.builders functions, -- 'model_best_state': a readily loadable best state for the model, including - the conditioner. The model obtained from `xp.cfg` should be compatible - with this state dict. In the case of a LM, the encodec model would not be - bundled along but instead provided separately. - -Those functions also support loading from a remote location with the Torch Hub API. -They also support overriding some parameters, in particular the device and dtype -of the returned model. -""" - -from pathlib import Path -from huggingface_hub import hf_hub_download -import typing as tp -import os - -from omegaconf import OmegaConf -import torch - -from . 
import builders - - -HF_MODEL_CHECKPOINTS_MAP = { - "small": "facebook/musicgen-small", - "medium": "facebook/musicgen-medium", - "large": "facebook/musicgen-large", - "melody": "facebook/musicgen-melody", -} - - -def _get_state_dict( - file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - device='cpu', - cache_dir: tp.Optional[str] = None, -): - # Return the state dict either from a file or url - file_or_url_or_id = str(file_or_url_or_id) - assert isinstance(file_or_url_or_id, str) - - if os.path.isfile(file_or_url_or_id): - return torch.load(file_or_url_or_id, map_location=device) - - if os.path.isdir(file_or_url_or_id): - file = f"{file_or_url_or_id}/{filename}" - return torch.load(file, map_location=device) - - elif file_or_url_or_id.startswith('https://'): - return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True) - - elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP: - assert filename is not None, "filename needs to be defined if using HF checkpoints" - - repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id] - file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir) - return torch.load(file, map_location=device) - - else: - raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.") - - -def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - model = builders.get_compression_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - return model - - -def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - if cfg.device == 'cpu': - cfg.dtype = 'float32' - else: - cfg.dtype = 'float16' - model = builders.get_lm_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - model.cfg = cfg - return model diff --git a/spaces/LuciaCw/greet/README.md b/spaces/LuciaCw/greet/README.md deleted file mode 100644 index aa8d62777bab00ebfbc805324a2bb394c7c87577..0000000000000000000000000000000000000000 --- a/spaces/LuciaCw/greet/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Greet -emoji: 🐨 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.0.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/modules.py b/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive-Nijigasaku-Chat-iSTFT-GPT3/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, 
eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 
2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - 
self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) 
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Mandy234/Mandy234-myQAmodel/app.py b/spaces/Mandy234/Mandy234-myQAmodel/app.py deleted file mode 100644 index 2d51dba780869cd5f2c03bd7788aa6c766e28460..0000000000000000000000000000000000000000 --- a/spaces/Mandy234/Mandy234-myQAmodel/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Mandy234/myQAmodel").launch() \ No newline at end of file diff --git a/spaces/Manmay/tortoise-tts/tortoise/models/cvvp.py b/spaces/Manmay/tortoise-tts/tortoise/models/cvvp.py deleted file mode 100644 index 544ca47b21a31c8d26d4ea407b9783e7d59e8126..0000000000000000000000000000000000000000 --- a/spaces/Manmay/tortoise-tts/tortoise/models/cvvp.py +++ /dev/null @@ -1,142 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import einsum - -from tortoise.models.arch_util import AttentionBlock -from tortoise.models.xtransformers import ContinuousTransformerWrapper, Encoder - - -def exists(val): - return val is not None - - -def masked_mean(t, mask): - t = t.masked_fill(~mask, 0.) 
- return t.sum(dim=1) / mask.sum(dim=1) - - -class CollapsingTransformer(nn.Module): - def __init__(self, model_dim, output_dims, heads, dropout, depth, mask_percentage=0, **encoder_kwargs): - super().__init__() - self.transformer = ContinuousTransformerWrapper( - max_seq_len=-1, - use_pos_emb=False, - attn_layers=Encoder( - dim=model_dim, - depth=depth, - heads=heads, - ff_dropout=dropout, - ff_mult=1, - attn_dropout=dropout, - use_rmsnorm=True, - ff_glu=True, - rotary_pos_emb=True, - **encoder_kwargs, - )) - self.pre_combiner = nn.Sequential(nn.Conv1d(model_dim, output_dims, 1), - AttentionBlock( - output_dims, num_heads=heads, do_checkpoint=False), - nn.Conv1d(output_dims, output_dims, 1)) - self.mask_percentage = mask_percentage - - def forward(self, x, **transformer_kwargs): - h = self.transformer(x, **transformer_kwargs) - h = h.permute(0, 2, 1) - h = self.pre_combiner(h).permute(0, 2, 1) - if self.training: - mask = torch.rand_like(h.float()) > self.mask_percentage - else: - mask = torch.ones_like(h.float()).bool() - return masked_mean(h, mask) - - -class ConvFormatEmbedding(nn.Module): - def __init__(self, *args, **kwargs): - super().__init__() - self.emb = nn.Embedding(*args, **kwargs) - - def forward(self, x): - y = self.emb(x) - return y.permute(0, 2, 1) - - -class CVVP(nn.Module): - def __init__( - self, - model_dim=512, - transformer_heads=8, - dropout=.1, - conditioning_enc_depth=8, - cond_mask_percentage=0, - mel_channels=80, - mel_codes=None, - speech_enc_depth=8, - speech_mask_percentage=0, - latent_multiplier=1, - ): - super().__init__() - latent_dim = latent_multiplier*model_dim - self.temperature = nn.Parameter(torch.tensor(1.)) - - self.cond_emb = nn.Sequential(nn.Conv1d(mel_channels, model_dim//2, kernel_size=5, stride=2, padding=2), - nn.Conv1d(model_dim//2, model_dim, kernel_size=3, stride=2, padding=1)) - self.conditioning_transformer = CollapsingTransformer( - model_dim, model_dim, transformer_heads, dropout, conditioning_enc_depth, cond_mask_percentage) - self.to_conditioning_latent = nn.Linear( - latent_dim, latent_dim, bias=False) - - if mel_codes is None: - self.speech_emb = nn.Conv1d( - mel_channels, model_dim, kernel_size=5, padding=2) - else: - self.speech_emb = ConvFormatEmbedding(mel_codes, model_dim) - self.speech_transformer = CollapsingTransformer( - model_dim, latent_dim, transformer_heads, dropout, speech_enc_depth, speech_mask_percentage) - self.to_speech_latent = nn.Linear( - latent_dim, latent_dim, bias=False) - - def get_grad_norm_parameter_groups(self): - return { - 'conditioning': list(self.conditioning_transformer.parameters()), - 'speech': list(self.speech_transformer.parameters()), - } - - def forward( - self, - mel_cond, - mel_input, - return_loss=False - ): - cond_emb = self.cond_emb(mel_cond).permute(0, 2, 1) - enc_cond = self.conditioning_transformer(cond_emb) - cond_latents = self.to_conditioning_latent(enc_cond) - - speech_emb = self.speech_emb(mel_input).permute(0, 2, 1) - enc_speech = self.speech_transformer(speech_emb) - speech_latents = self.to_speech_latent(enc_speech) - - cond_latents, speech_latents = map(lambda t: F.normalize( - t, p=2, dim=-1), (cond_latents, speech_latents)) - temp = self.temperature.exp() - - if not return_loss: - sim = einsum('n d, n d -> n', cond_latents, - speech_latents) * temp - return sim - - sim = einsum('i d, j d -> i j', cond_latents, - speech_latents) * temp - labels = torch.arange( - cond_latents.shape[0], device=mel_input.device) - loss = (F.cross_entropy(sim, labels) + - F.cross_entropy(sim.t(), 
labels)) / 2 - - return loss - - -if __name__ == '__main__': - clvp = CVVP() - clvp(torch.randn(2, 80, 100), - torch.randn(2, 80, 95), - return_loss=True) diff --git a/spaces/MarcusSu1216/XingTong/hubert/__init__.py b/spaces/MarcusSu1216/XingTong/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MarcusSu1216/XingTong/models.py b/spaces/MarcusSu1216/XingTong/models.py deleted file mode 100644 index 13278d680493970f5a670cf3fc955a6e9b7ab1d5..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/models.py +++ /dev/null @@ -1,420 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = 
nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, noice_scale=1): - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs) * noice_scale) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, 
model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels , 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if (spk_emb is not None): - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size 
- self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - - def forward(self, c, f0, uv, spec, g=None, c_lengths=None, spec_lengths=None): - g = self.emb_g(g).transpose(1,2) - # ssl prenet - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - # f0 predict - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - - # encoder - z_ptemp, m_p, logs_p, _ = self.enc_p(x, x_mask, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - # flow - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # nsf decoder - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 - - def infer(self, c, f0, uv, g=None, noice_scale=0.35, predict_f0=False): - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - if predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) 
/ 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/MarcusSu1216/XingTong/vdecoder/hifigan/utils.py b/spaces/MarcusSu1216/XingTong/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/MeiJuice/CheckGPT/README.md b/spaces/MeiJuice/CheckGPT/README.md deleted file mode 100644 index db90e60ec0dc69b8ae48dcc922f6dcaf1fc9c3dd..0000000000000000000000000000000000000000 --- a/spaces/MeiJuice/CheckGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CheckGPT -emoji: 🐢 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/ocrnet_r50-d8.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/ocrnet_r50-d8.py deleted file mode 100644 index 615aa3ff703942b6c22b2d6e9642504dd3e41ebd..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/ocrnet_r50-d8.py +++ /dev/null @@ -1,47 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', 
requires_grad=True) -model = dict( - type='CascadeEncoderDecoder', - num_stages=2, - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=[ - dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=2048, - in_index=3, - channels=512, - ocr_channels=256, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) - ], - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py deleted file mode 100644 index 2c0da3503b75441738efe38d70352b55a210a34a..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import torch -from torch.nn import GroupNorm, LayerNorm - -from annotator.uniformer.mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from annotator.uniformer.mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. - - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. - - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. 
- - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset - layer. So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the - offset layer in deformable convs, set ``dcn_offset_lr_mult`` - to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when - the model contains multiple DCN layers in places other than - backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. - - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - '.backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). 
- """ - - def __init__(self, optimizer_cfg, paramwise_cfg=None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self): - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group, param_group_list): - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, params, module, prefix='', is_dcn_module=None): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) - dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) - - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. 
It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - if check_ops_exist(): - from annotator.uniformer.mmcv.ops import DeformConv2d, ModulatedDeformConv2d - is_dcn_module = isinstance(module, - (DeformConv2d, ModulatedDeformConv2d)) - else: - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/_base_/pretrain_runtime.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/_base_/pretrain_runtime.py deleted file mode 100644 index cb2800d50a570881475035e3b0da9c81e88712d1..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/_base_/pretrain_runtime.py +++ /dev/null @@ -1,14 +0,0 @@ -_base_ = 'default_runtime.py' - -default_hooks = dict( - logger=dict(type='LoggerHook', interval=1000), - checkpoint=dict( - type='CheckpointHook', - interval=10000, - by_epoch=False, - max_keep_ckpts=1), -) - -# Evaluation -val_evaluator = None -test_evaluator = None diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/maskrcnn/mask-rcnn_resnet50_fpn_160e_icdar2015.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/maskrcnn/mask-rcnn_resnet50_fpn_160e_icdar2015.py deleted file mode 100644 index 41509ac17785bcfb93726c16139dd11bddb6020b..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/maskrcnn/mask-rcnn_resnet50_fpn_160e_icdar2015.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '_base_mask-rcnn_resnet50_fpn.py', - 
'../_base_/datasets/icdar2015.py', - '../_base_/default_runtime.py', - '../_base_/schedules/schedule_sgd_base.py', -] - -# optimizer -optim_wrapper = dict(optimizer=dict(lr=0.08)) -train_cfg = dict(max_epochs=160) -# learning policy -param_scheduler = [ - dict(type='LinearLR', end=500, start_factor=0.001, by_epoch=False), - dict(type='MultiStepLR', milestones=[80, 128], end=160), -] - -# dataset settings -icdar2015_textdet_train = _base_.icdar2015_textdet_train -icdar2015_textdet_test = _base_.icdar2015_textdet_test -icdar2015_textdet_train.pipeline = _base_.train_pipeline -icdar2015_textdet_test.pipeline = _base_.test_pipeline - -train_dataloader = dict( - batch_size=8, - num_workers=4, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=icdar2015_textdet_train) - -val_dataloader = dict( - batch_size=1, - num_workers=1, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=icdar2015_textdet_test) - -test_dataloader = val_dataloader - -auto_scale_lr = dict(base_batch_size=8) diff --git a/spaces/MrZak/LearnUp-4.1/app.py b/spaces/MrZak/LearnUp-4.1/app.py deleted file mode 100644 index dcdfa83d2f3096ce9888408ecd8fc75a0aa60641..0000000000000000000000000000000000000000 --- a/spaces/MrZak/LearnUp-4.1/app.py +++ /dev/null @@ -1,90 +0,0 @@ -import os -import openai -import gradio as gr - -# Set your OpenAI API key -openai.api_key = "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" - -start_sequence = "\nAI:" -restart_sequence = "\nHuman: " - -prompt = "Ask Learny Anything" - -def openai_create(prompt): - response = openai.Completion.create( - engine="text-davinci-003", - prompt=prompt, - temperature=0.9, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0.6, - stop=[" Human:", " AI:"] - ) - print(response) - return response.choices[0].text.strip() - -def format_chat_bubble(text, is_user=False): - if is_user: - bubble = f"You: {text}" - class_name = "user-bubble" - else: - bubble = f"Learny: {text}" - class_name = "bot-bubble" - return f'
<div class="{class_name}">{bubble}</div>
' - -def chatgpt_clone(input, history): - history = history or [] - s = list(sum(history, ())) - s.append(input) - inp = ' '.join(s) - output = openai_create(inp) - history.append((input, output)) - chat_history = [format_chat_bubble(item[0], is_user=True) for item in history[:-1]] - chat_history.append(format_chat_bubble(history[-1][1])) # Display the last response as the bot's message - return "
".join(chat_history) - - -input_text = gr.inputs.Textbox(placeholder=prompt) -output_text = gr.outputs.HTML() - -interface = gr.Interface( - fn=chatgpt_clone, - inputs=input_text, - outputs=output_text, - theme="compact", - examples=[ - ["What is the capital of France?"], - ["Who is the president of the United States?"], - ] -) - -# Apply custom CSS style to the interface -custom_css = """ -.user-bubble { - background-color: #EDF2FC; - color: black; - padding: 10px; - border-radius: 20px; - margin-bottom: 10px; - font-family: Arial, sans-serif; -} - -.bot-bubble { - background-color: #9FB6CD; - color: black; - padding: 10px; - border-radius: 20px; - margin-bottom: 10px; - font-family: Arial, sans-serif; -} - -.gradio-interface .gradio-submit-button { - background-color: grey; - color: white; -} -""" - -interface.css = custom_css - -interface.launch() diff --git a/spaces/MuGeminorum/insecta/khandy/list_utils.py b/spaces/MuGeminorum/insecta/khandy/list_utils.py deleted file mode 100644 index 04b9080f0a8a05c645c16a7e5ba3132260907b89..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/list_utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import random -import itertools - - -def to_list(obj): - if obj is None: - return None - elif hasattr(obj, '__iter__') and not isinstance(obj, str): - try: - return list(obj) - except: - return [obj] - else: - return [obj] - - -def convert_lists_to_record(*list_objs, delimiter=None): - assert len(list_objs) >= 1, 'list_objs length must >= 1.' - delimiter = delimiter or ',' - - assert isinstance(list_objs[0], (tuple, list)) - number = len(list_objs[0]) - for item in list_objs[1:]: - assert isinstance(item, (tuple, list)) - assert len(item) == number, '{} != {}'.format(len(item), number) - - records = [] - record_list = zip(*list_objs) - for record in record_list: - record_str = [str(item) for item in record] - records.append(delimiter.join(record_str)) - return records - - -def shuffle_table(*table): - """ - Notes: - table can be seen as list of list which have equal items. - """ - shuffled_list = list(zip(*table)) - random.shuffle(shuffled_list) - tuple_list = zip(*shuffled_list) - return [list(item) for item in tuple_list] - - -def transpose_table(table): - """ - Notes: - table can be seen as list of list which have equal items. - """ - m, n = len(table), len(table[0]) - return [[table[i][j] for i in range(m)] for j in range(n)] - - -def concat_list(in_list): - """Concatenate a list of list into a single list. - - Args: - in_list (list): The list of list to be merged. - - Returns: - list: The concatenated flat list. 
- - References: - mmcv.concat_list - """ - return list(itertools.chain(*in_list)) - \ No newline at end of file diff --git a/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/glow_modules.py b/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/glow_modules.py deleted file mode 100644 index c589af0f2eba2b154317912f9ad01a4163b3fd6a..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/glow_modules.py +++ /dev/null @@ -1,362 +0,0 @@ -import scipy -from torch.nn import functional as F -import torch -from torch import nn -import numpy as np -from modules.commons.wavenet import WN -from modules.tts.glow import utils - - -class ActNorm(nn.Module): - def __init__(self, channels, ddi=False, **kwargs): - super().__init__() - self.channels = channels - self.initialized = not ddi - - self.logs = nn.Parameter(torch.zeros(1, channels, 1)) - self.bias = nn.Parameter(torch.zeros(1, channels, 1)) - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - if x_mask is None: - x_mask = torch.ones(x.size(0), 1, x.size(2)).to(device=x.device, dtype=x.dtype) - x_len = torch.sum(x_mask, [1, 2]) - if not self.initialized: - self.initialize(x, x_mask) - self.initialized = True - - if reverse: - z = (x - self.bias) * torch.exp(-self.logs) * x_mask - logdet = torch.sum(-self.logs) * x_len - else: - z = (self.bias + torch.exp(self.logs) * x) * x_mask - logdet = torch.sum(self.logs) * x_len # [b] - return z, logdet - - def store_inverse(self): - pass - - def set_ddi(self, ddi): - self.initialized = not ddi - - def initialize(self, x, x_mask): - with torch.no_grad(): - denom = torch.sum(x_mask, [0, 2]) - m = torch.sum(x * x_mask, [0, 2]) / denom - m_sq = torch.sum(x * x * x_mask, [0, 2]) / denom - v = m_sq - (m ** 2) - logs = 0.5 * torch.log(torch.clamp_min(v, 1e-6)) - - bias_init = (-m * torch.exp(-logs)).view(*self.bias.shape).to(dtype=self.bias.dtype) - logs_init = (-logs).view(*self.logs.shape).to(dtype=self.logs.dtype) - - self.bias.data.copy_(bias_init) - self.logs.data.copy_(logs_init) - - -class InvConvNear(nn.Module): - def __init__(self, channels, n_split=4, no_jacobian=False, lu=True, n_sqz=2, **kwargs): - super().__init__() - assert (n_split % 2 == 0) - self.channels = channels - self.n_split = n_split - self.n_sqz = n_sqz - self.no_jacobian = no_jacobian - - w_init = torch.qr(torch.FloatTensor(self.n_split, self.n_split).normal_())[0] - if torch.det(w_init) < 0: - w_init[:, 0] = -1 * w_init[:, 0] - self.lu = lu - if lu: - # LU decomposition can slightly speed up the inverse - np_p, np_l, np_u = scipy.linalg.lu(w_init) - np_s = np.diag(np_u) - np_sign_s = np.sign(np_s) - np_log_s = np.log(np.abs(np_s)) - np_u = np.triu(np_u, k=1) - l_mask = np.tril(np.ones(w_init.shape, dtype=float), -1) - eye = np.eye(*w_init.shape, dtype=float) - - self.register_buffer('p', torch.Tensor(np_p.astype(float))) - self.register_buffer('sign_s', torch.Tensor(np_sign_s.astype(float))) - self.l = nn.Parameter(torch.Tensor(np_l.astype(float)), requires_grad=True) - self.log_s = nn.Parameter(torch.Tensor(np_log_s.astype(float)), requires_grad=True) - self.u = nn.Parameter(torch.Tensor(np_u.astype(float)), requires_grad=True) - self.register_buffer('l_mask', torch.Tensor(l_mask)) - self.register_buffer('eye', torch.Tensor(eye)) - else: - self.weight = nn.Parameter(w_init) - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - b, c, t = x.size() - assert (c % self.n_split == 0) - if x_mask is None: - x_mask = 1 - x_len = torch.ones((b,), 
dtype=x.dtype, device=x.device) * t - else: - x_len = torch.sum(x_mask, [1, 2]) - - x = x.view(b, self.n_sqz, c // self.n_split, self.n_split // self.n_sqz, t) - x = x.permute(0, 1, 3, 2, 4).contiguous().view(b, self.n_split, c // self.n_split, t) - - if self.lu: - self.weight, log_s = self._get_weight() - logdet = log_s.sum() - logdet = logdet * (c / self.n_split) * x_len - else: - logdet = torch.logdet(self.weight) * (c / self.n_split) * x_len # [b] - - if reverse: - if hasattr(self, "weight_inv"): - weight = self.weight_inv - else: - weight = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype) - logdet = -logdet - else: - weight = self.weight - if self.no_jacobian: - logdet = 0 - - weight = weight.view(self.n_split, self.n_split, 1, 1) - z = F.conv2d(x, weight) - - z = z.view(b, self.n_sqz, self.n_split // self.n_sqz, c // self.n_split, t) - z = z.permute(0, 1, 3, 2, 4).contiguous().view(b, c, t) * x_mask - return z, logdet - - def _get_weight(self): - l, log_s, u = self.l, self.log_s, self.u - l = l * self.l_mask + self.eye - u = u * self.l_mask.transpose(0, 1).contiguous() + torch.diag(self.sign_s * torch.exp(log_s)) - weight = torch.matmul(self.p, torch.matmul(l, u)) - return weight, log_s - - def store_inverse(self): - weight, _ = self._get_weight() - self.weight_inv = torch.inverse(weight.float()).to(next(self.parameters()).device) - - -class InvConv(nn.Module): - def __init__(self, channels, no_jacobian=False, lu=True, **kwargs): - super().__init__() - w_shape = [channels, channels] - w_init = np.linalg.qr(np.random.randn(*w_shape))[0].astype(float) - LU_decomposed = lu - if not LU_decomposed: - # Sample a random orthogonal matrix: - self.register_parameter("weight", nn.Parameter(torch.Tensor(w_init))) - else: - np_p, np_l, np_u = scipy.linalg.lu(w_init) - np_s = np.diag(np_u) - np_sign_s = np.sign(np_s) - np_log_s = np.log(np.abs(np_s)) - np_u = np.triu(np_u, k=1) - l_mask = np.tril(np.ones(w_shape, dtype=float), -1) - eye = np.eye(*w_shape, dtype=float) - - self.register_buffer('p', torch.Tensor(np_p.astype(float))) - self.register_buffer('sign_s', torch.Tensor(np_sign_s.astype(float))) - self.l = nn.Parameter(torch.Tensor(np_l.astype(float))) - self.log_s = nn.Parameter(torch.Tensor(np_log_s.astype(float))) - self.u = nn.Parameter(torch.Tensor(np_u.astype(float))) - self.l_mask = torch.Tensor(l_mask) - self.eye = torch.Tensor(eye) - self.w_shape = w_shape - self.LU = LU_decomposed - self.weight = None - - def get_weight(self, device, reverse): - w_shape = self.w_shape - self.p = self.p.to(device) - self.sign_s = self.sign_s.to(device) - self.l_mask = self.l_mask.to(device) - self.eye = self.eye.to(device) - l = self.l * self.l_mask + self.eye - u = self.u * self.l_mask.transpose(0, 1).contiguous() + torch.diag(self.sign_s * torch.exp(self.log_s)) - dlogdet = self.log_s.sum() - if not reverse: - w = torch.matmul(self.p, torch.matmul(l, u)) - else: - l = torch.inverse(l.double()).float() - u = torch.inverse(u.double()).float() - w = torch.matmul(u, torch.matmul(l, self.p.inverse())) - return w.view(w_shape[0], w_shape[1], 1), dlogdet - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - """ - log-det = log|abs(|W|)| * pixels - """ - b, c, t = x.size() - if x_mask is None: - x_len = torch.ones((b,), dtype=x.dtype, device=x.device) * t - else: - x_len = torch.sum(x_mask, [1, 2]) - logdet = 0 - if not reverse: - weight, dlogdet = self.get_weight(x.device, reverse) - z = F.conv1d(x, weight) - if logdet is not None: - logdet = logdet + dlogdet * x_len - return 
z, logdet - else: - if self.weight is None: - weight, dlogdet = self.get_weight(x.device, reverse) - else: - weight, dlogdet = self.weight, self.dlogdet - z = F.conv1d(x, weight) - if logdet is not None: - logdet = logdet - dlogdet * x_len - return z, logdet - - def store_inverse(self): - self.weight, self.dlogdet = self.get_weight('cuda', reverse=True) - - -class CouplingBlock(nn.Module): - def __init__(self, in_channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=0, p_dropout=0, sigmoid_scale=False, wn=None): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - self.sigmoid_scale = sigmoid_scale - - start = torch.nn.Conv1d(in_channels // 2, hidden_channels, 1) - start = torch.nn.utils.weight_norm(start) - self.start = start - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. This helps with training stability - end = torch.nn.Conv1d(hidden_channels, in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - self.wn = WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels, p_dropout) - if wn is not None: - self.wn.in_layers = wn.in_layers - self.wn.res_skip_layers = wn.res_skip_layers - - def forward(self, x, x_mask=None, reverse=False, g=None, **kwargs): - if x_mask is None: - x_mask = 1 - x_0, x_1 = x[:, :self.in_channels // 2], x[:, self.in_channels // 2:] - - x = self.start(x_0) * x_mask - x = self.wn(x, x_mask, g) - out = self.end(x) - - z_0 = x_0 - m = out[:, :self.in_channels // 2, :] - logs = out[:, self.in_channels // 2:, :] - if self.sigmoid_scale: - logs = torch.log(1e-6 + torch.sigmoid(logs + 2)) - if reverse: - z_1 = (x_1 - m) * torch.exp(-logs) * x_mask - logdet = torch.sum(-logs * x_mask, [1, 2]) - else: - z_1 = (m + torch.exp(logs) * x_1) * x_mask - logdet = torch.sum(logs * x_mask, [1, 2]) - z = torch.cat([z_0, z_1], 1) - return z, logdet - - def store_inverse(self): - self.wn.remove_weight_norm() - - -class Glow(nn.Module): - def __init__(self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_blocks, - n_layers, - p_dropout=0., - n_split=4, - n_sqz=2, - sigmoid_scale=False, - gin_channels=0, - inv_conv_type='near', - share_cond_layers=False, - share_wn_layers=0, - ): - super().__init__() - - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_blocks = n_blocks - self.n_layers = n_layers - self.p_dropout = p_dropout - self.n_split = n_split - self.n_sqz = n_sqz - self.sigmoid_scale = sigmoid_scale - self.gin_channels = gin_channels - self.share_cond_layers = share_cond_layers - if gin_channels != 0 and share_cond_layers: - cond_layer = torch.nn.Conv1d(gin_channels * n_sqz, 2 * hidden_channels * n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - wn = None - self.flows = nn.ModuleList() - for b in range(n_blocks): - self.flows.append(ActNorm(channels=in_channels * n_sqz)) - if inv_conv_type == 'near': - self.flows.append(InvConvNear(channels=in_channels * n_sqz, n_split=n_split, n_sqz=n_sqz)) - if inv_conv_type == 'invconv': - self.flows.append(InvConv(channels=in_channels * n_sqz)) - if share_wn_layers > 0: - if b % share_wn_layers == 0: - wn = WN(hidden_channels, kernel_size, dilation_rate, n_layers, 
gin_channels * n_sqz, - p_dropout, share_cond_layers) - self.flows.append( - CouplingBlock( - in_channels * n_sqz, - hidden_channels, - kernel_size=kernel_size, - dilation_rate=dilation_rate, - n_layers=n_layers, - gin_channels=gin_channels * n_sqz, - p_dropout=p_dropout, - sigmoid_scale=sigmoid_scale, - wn=wn - )) - - def forward(self, x, x_mask=None, g=None, reverse=False, return_hiddens=False): - logdet_tot = 0 - if not reverse: - flows = self.flows - else: - flows = reversed(self.flows) - if return_hiddens: - hs = [] - if self.n_sqz > 1: - x, x_mask_ = utils.squeeze(x, x_mask, self.n_sqz) - if g is not None: - g, _ = utils.squeeze(g, x_mask, self.n_sqz) - x_mask = x_mask_ - if self.share_cond_layers and g is not None: - g = self.cond_layer(g) - for f in flows: - x, logdet = f(x, x_mask, g=g, reverse=reverse) - if return_hiddens: - hs.append(x) - logdet_tot += logdet - if self.n_sqz > 1: - x, x_mask = utils.unsqueeze(x, x_mask, self.n_sqz) - if return_hiddens: - return x, logdet_tot, hs - return x, logdet_tot - - def store_inverse(self): - def remove_weight_norm(m): - try: - nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(remove_weight_norm) - for f in self.flows: - f.store_inverse() diff --git a/spaces/Nixic/rvc-models/config.py b/spaces/Nixic/rvc-models/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/Nixic/rvc-models/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. 
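# Editor's aside (not part of the original config.py): "MasOS" in the comment above
# should read "macOS 12.3+". On PyTorch 1.12 and newer the MPS backend exposes an
# official availability check, so the getattr/try-except probe below is only needed
# for older nightly builds. A minimal sketch of the modern check, with a hypothetical
# helper name:
import torch

def has_mps_stable() -> bool:
    mps_backend = getattr(torch.backends, "mps", None)
    return mps_backend is not None and mps_backend.is_available()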
-# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/random_cycler.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/random_cycler.py deleted file mode 100644 index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/random_cycler.py +++ /dev/null @@ -1,37 +0,0 @@ -import random - -class RandomCycler: - """ - Creates an internal copy of a sequence and allows access to its items in a constrained random - order. For a source sequence of n items and one or several consecutive queries of a total - of m items, the following guarantees hold (one implies the other): - - Each item will be returned between m // n and ((m - 1) // n) + 1 times. - - Between two appearances of the same item, there may be at most 2 * (n - 1) other items. - """ - - def __init__(self, source): - if len(source) == 0: - raise Exception("Can't create RandomCycler from an empty collection") - self.all_items = list(source) - self.next_items = [] - - def sample(self, count: int): - shuffle = lambda l: random.sample(l, len(l)) - - out = [] - while count > 0: - if count >= len(self.all_items): - out.extend(shuffle(list(self.all_items))) - count -= len(self.all_items) - continue - n = min(count, len(self.next_items)) - out.extend(self.next_items[:n]) - count -= n - self.next_items = self.next_items[n:] - if len(self.next_items) == 0: - self.next_items = shuffle(list(self.all_items)) - return out - - def __next__(self): - return self.sample(1)[0] - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py deleted file mode 100644 index cf08d1fe4b470477b724aa8d770d91c0cac35a0e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import List, Tuple - - -def get_audio_files(manifest_path: str) -> Tuple[str, List[str], List[int]]: - fnames, sizes = [], [] - with open(manifest_path, "r") as f: - root_dir = f.readline().strip() - for line in f: - items = line.strip().split("\t") - assert ( - len(items) == 2 - ), f"File must have two columns separated by tab. 
Got {line}" - fnames.append(items[0]) - sizes.append(int(items[1])) - return root_dir, fnames, sizes diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py deleted file mode 100644 index 8cc2a7174b765b7ad8808489196e12082a91a2d7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.tasks import register_task -from fairseq.tasks.multilingual_translation import MultilingualTranslationTask -from fairseq.utils import safe_hasattr - -from .loss.latent_depth import LatentLayersKLLoss, LatentLayersSparsityLoss - - -@register_task("multilingual_translation_latent_depth") -class MultilingualTranslationTaskLatentDepth(MultilingualTranslationTask): - """A task for multiple translation with latent depth. - - See `"Deep Transformer with Latent Depth" - (Li et al., 2020) `_. - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - MultilingualTranslationTask.add_args(parser) - parser.add_argument('--encoder-latent-layer', action='store_true', help='latent layer selection in encoder') - parser.add_argument('--decoder-latent-layer', action='store_true', help='latent layer selection in decoder') - parser.add_argument('--target-layers', default=-1, type=int, - help='number of effective layers to learn; -1 means no constraint') - parser.add_argument('--sparsity-weight', default=0.0, type=float, - help='weight for sparsity loss') - parser.add_argument('--share-weight', default=0.0, type=float, - help='weight for sharing loss') - parser.add_argument('--soft-update', default=1, type=int, - help='number of updates with soft sampling') - parser.add_argument('--anneal-updates', default=1, type=int, - help='number of updates to anneal the KL loss weight') - parser.add_argument('--prior', default="uniform", type=str, - help='prior used for computing KL loss') - # fmt: on - - def __init__(self, args, dicts, training): - super().__init__(args, dicts, training) - self.src_langs, self.tgt_langs = zip( - *[(lang.split("-")[0], lang.split("-")[1]) for lang in args.lang_pairs] - ) - if self.training and self.encoder_latent_layer: - assert self.args.share_encoders - if self.training and self.decoder_latent_layer: - assert self.args.share_decoders - if training or self.encoder_latent_layer or self.decoder_latent_layer: - self.lang_pairs = args.lang_pairs - else: - self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)] - self.eval_lang_pairs = self.lang_pairs - self.model_lang_pairs = self.lang_pairs - if self.training and (self.encoder_latent_layer or self.decoder_latent_layer): - self.kl_loss = LatentLayersKLLoss(self.args) - self.sparsity_loss = LatentLayersSparsityLoss(self.args) - - def _per_lang_pair_train_loss( - self, lang_pair, model, update_num, criterion, sample, optimizer, ignore_grad - ): - src, tgt = lang_pair.split("-") - if self.encoder_latent_layer: - src_lang_idx = self.src_lang_idx_dict[src] - model.models[lang_pair].encoder.set_lang_idx(src_lang_idx) - 
model.models[lang_pair].encoder.layer_select.hard_select = ( - update_num > self.args.soft_update - ) - if self.decoder_latent_layer: - tgt_lang_idx = self.tgt_lang_idx_dict[tgt] - model.models[lang_pair].decoder.set_lang_idx(tgt_lang_idx) - model.models[lang_pair].decoder.layer_select.hard_select = ( - update_num > self.args.soft_update - ) - - loss, sample_size, logging_output = criterion( - model.models[lang_pair], sample[lang_pair] - ) - if self.encoder_latent_layer: - none_samples = sum( - 1 if x is None else 0 - for x in model.models[lang_pair].encoder.layer_select.layer_samples - ) - if none_samples == 0 or self.args.prior != "agged_posterior": - loss += self.kl_loss( - model.models[lang_pair].encoder.layer_select.layer_samples, - src_lang_idx, - update_num, - sample_size, - ) - if self.decoder_latent_layer: - none_samples = sum( - 1 if x is None else 0 - for x in model.models[lang_pair].decoder.layer_select.layer_samples - ) - if none_samples == 0 or self.args.prior != "agged_posterior": - loss += self.kl_loss( - model.models[lang_pair].decoder.layer_select.layer_samples, - tgt_lang_idx, - update_num, - sample_size, - ) - if ignore_grad: - loss *= 0 - - if hasattr(self, "sparsity_loss") and self.sparsity_loss.is_valid(update_num): - # need to retain the graph if sparsity loss needs to be added - loss.backward(retain_graph=True) - else: - optimizer.backward(loss) - - return loss, sample_size, logging_output - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - agg_loss, agg_sample_size, agg_logging_output = super().train_step( - sample, model, criterion, optimizer, update_num, ignore_grad - ) - # compute auxiliary loss from layere sparsity, based on all samples from all languages - if hasattr(self, "sparsity_loss") and self.sparsity_loss.is_valid(update_num): - sparsity_loss = 0 - if self.encoder_latent_layer: - sparsity_loss += self.sparsity_loss( - next( - iter(model.models.values()) - ).encoder.layer_select.layer_samples, - update_num, - agg_sample_size, - ) - if self.decoder_latent_layer: - sparsity_loss += self.sparsity_loss( - next( - iter(model.models.values()) - ).decoder.layer_select.layer_samples, - update_num, - agg_sample_size, - ) - if sparsity_loss > 0: - optimizer.backward(sparsity_loss) - return agg_loss, agg_sample_size, agg_logging_output - - def _per_lang_pair_valid_loss(self, lang_pair, model, criterion, sample): - src, tgt = lang_pair.split("-") - if self.encoder_latent_layer: - src_lang_idx = self.src_lang_idx_dict[src] - model.models[lang_pair].encoder.set_lang_idx(src_lang_idx) - if self.decoder_latent_layer: - tgt_lang_idx = self.tgt_lang_idx_dict[tgt] - model.models[lang_pair].decoder.set_lang_idx(tgt_lang_idx) - loss, sample_size, logging_output = criterion( - model.models[lang_pair], sample[lang_pair] - ) - return loss, sample_size, logging_output - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - if self.encoder_latent_layer or self.decoder_latent_layer: - for model in models: - if self.encoder_latent_layer: - assert model.encoder.layer_select is not None - src_lang_idx = self.src_lang_idx_dict[self.args.source_lang] - model.encoder.set_lang_idx(src_lang_idx) - if self.decoder_latent_layer: - assert model.decoder.layer_select is not None - tgt_lang_idx = self.tgt_lang_idx_dict[self.args.target_lang] - model.decoder.set_lang_idx(tgt_lang_idx) - return super().inference_step( - generator, models, sample, prefix_tokens, constraints - ) - - @property - def 
encoder_latent_layer(self): - return ( - safe_hasattr(self.args, "encoder_latent_layer") - and self.args.encoder_latent_layer - ) - - @property - def decoder_latent_layer(self): - return ( - safe_hasattr(self.args, "decoder_latent_layer") - and self.args.decoder_latent_layer - ) - - @property - def src_lang_idx_dict(self): - return {lang: lang_idx for lang_idx, lang in enumerate(self.src_langs)} - - @property - def tgt_lang_idx_dict(self): - return {lang: lang_idx for lang_idx, lang in enumerate(self.tgt_langs)} diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/generate_waveform.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/generate_waveform.py deleted file mode 100644 index bfc2ef8eb3d91366caf7609d75aa1795ab0ed8f9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/generate_waveform.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import matplotlib.pyplot as plt -import numpy as np -from pathlib import Path -import soundfile as sf -import sys -import torch -import torchaudio - -from fairseq import checkpoint_utils, options, tasks, utils -from fairseq.logging import progress_bar -from fairseq.tasks.text_to_speech import plot_tts_output -from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDataset - - -logging.basicConfig() -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -def make_parser(): - parser = options.get_speech_generation_parser() - parser.add_argument("--dump-features", action="store_true") - parser.add_argument("--dump-waveforms", action="store_true") - parser.add_argument("--dump-attentions", action="store_true") - parser.add_argument("--dump-eos-probs", action="store_true") - parser.add_argument("--dump-plots", action="store_true") - parser.add_argument("--dump-target", action="store_true") - parser.add_argument("--output-sample-rate", default=22050, type=int) - parser.add_argument("--teacher-forcing", action="store_true") - parser.add_argument( - "--audio-format", type=str, default="wav", choices=["wav", "flac"] - ) - return parser - - -def postprocess_results( - dataset: TextToSpeechDataset, sample, hypos, resample_fn, dump_target -): - def to_np(x): - return None if x is None else x.detach().cpu().numpy() - - sample_ids = [dataset.ids[i] for i in sample["id"].tolist()] - texts = sample["src_texts"] - attns = [to_np(hypo["attn"]) for hypo in hypos] - eos_probs = [to_np(hypo.get("eos_prob", None)) for hypo in hypos] - feat_preds = [to_np(hypo["feature"]) for hypo in hypos] - wave_preds = [to_np(resample_fn(h["waveform"])) for h in hypos] - if dump_target: - feat_targs = [to_np(hypo["targ_feature"]) for hypo in hypos] - wave_targs = [to_np(resample_fn(h["targ_waveform"])) for h in hypos] - else: - feat_targs = [None for _ in hypos] - wave_targs = [None for _ in hypos] - - return zip(sample_ids, texts, attns, eos_probs, feat_preds, wave_preds, - feat_targs, wave_targs) - - -def dump_result( - is_na_model, - args, - vocoder, - sample_id, - text, - attn, - eos_prob, - feat_pred, - wave_pred, - feat_targ, - wave_targ, -): - sample_rate = args.output_sample_rate - out_root = Path(args.results_path) - if args.dump_features: - feat_dir = out_root / "feat" - feat_dir.mkdir(exist_ok=True, parents=True) 
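# feat_pred holds the model's predicted acoustic features for this utterance
# (one row per output frame, e.g. mel-spectrogram bins per column); it is saved
# as a single .npy file keyed by the utterance id.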
- np.save(feat_dir / f"{sample_id}.npy", feat_pred) - if args.dump_target: - feat_tgt_dir = out_root / "feat_tgt" - feat_tgt_dir.mkdir(exist_ok=True, parents=True) - np.save(feat_tgt_dir / f"{sample_id}.npy", feat_targ) - if args.dump_attentions: - attn_dir = out_root / "attn" - attn_dir.mkdir(exist_ok=True, parents=True) - np.save(attn_dir / f"{sample_id}.npy", attn.numpy()) - if args.dump_eos_probs and not is_na_model: - eos_dir = out_root / "eos" - eos_dir.mkdir(exist_ok=True, parents=True) - np.save(eos_dir / f"{sample_id}.npy", eos_prob) - - if args.dump_plots: - images = [feat_pred.T] if is_na_model else [feat_pred.T, attn] - names = ["output"] if is_na_model else ["output", "alignment"] - if feat_targ is not None: - images = [feat_targ.T] + images - names = [f"target (idx={sample_id})"] + names - if is_na_model: - plot_tts_output(images, names, attn, "alignment", suptitle=text) - else: - plot_tts_output(images, names, eos_prob, "eos prob", suptitle=text) - plot_dir = out_root / "plot" - plot_dir.mkdir(exist_ok=True, parents=True) - plt.savefig(plot_dir / f"{sample_id}.png") - plt.close() - - if args.dump_waveforms: - ext = args.audio_format - if wave_pred is not None: - wav_dir = out_root / f"{ext}_{sample_rate}hz_{vocoder}" - wav_dir.mkdir(exist_ok=True, parents=True) - sf.write(wav_dir / f"{sample_id}.{ext}", wave_pred, sample_rate) - if args.dump_target and wave_targ is not None: - wav_tgt_dir = out_root / f"{ext}_{sample_rate}hz_{vocoder}_tgt" - wav_tgt_dir.mkdir(exist_ok=True, parents=True) - sf.write(wav_tgt_dir / f"{sample_id}.{ext}", wave_targ, sample_rate) - - -def main(args): - assert(args.dump_features or args.dump_waveforms or args.dump_attentions - or args.dump_eos_probs or args.dump_plots) - if args.max_tokens is None and args.batch_size is None: - args.max_tokens = 8000 - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - task = tasks.setup_task(args) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [args.path], - task=task, - ) - model = models[0].cuda() if use_cuda else models[0] - # use the original n_frames_per_step - task.args.n_frames_per_step = saved_cfg.task.n_frames_per_step - task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task) - - data_cfg = task.data_cfg - sample_rate = data_cfg.config.get("features", {}).get("sample_rate", 22050) - resample_fn = { - False: lambda x: x, - True: lambda x: torchaudio.sox_effects.apply_effects_tensor( - x.detach().cpu().unsqueeze(0), sample_rate, - [['rate', str(args.output_sample_rate)]] - )[0].squeeze(0) - }.get(args.output_sample_rate != sample_rate) - if args.output_sample_rate != sample_rate: - logger.info(f"resampling to {args.output_sample_rate}Hz") - - generator = task.build_generator([model], args) - itr = task.get_batch_iterator( - dataset=task.dataset(args.gen_subset), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=args.required_batch_size_multiple, - num_shards=args.num_shards, - shard_id=args.shard_id, - num_workers=args.num_workers, - data_buffer_size=args.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - Path(args.results_path).mkdir(exist_ok=True, parents=True) - is_na_model = getattr(model, "NON_AUTOREGRESSIVE", False) - dataset = task.dataset(args.gen_subset) - vocoder = task.args.vocoder - with progress_bar.build_progress_bar(args, itr) as t: - for sample in t: - sample = 
utils.move_to_cuda(sample) if use_cuda else sample - hypos = generator.generate(model, sample, has_targ=args.dump_target) - for result in postprocess_results( - dataset, sample, hypos, resample_fn, args.dump_target - ): - dump_result(is_na_model, args, vocoder, *result) - - -def cli_main(): - parser = make_parser() - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/criss/mining/mine.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/criss/mining/mine.py deleted file mode 100644 index c872da196fe0df776622365748ad7963fee1f0a0..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/criss/mining/mine.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import argparse -import glob -from subprocess import check_call - -try: - import faiss - - has_faiss = True -except ImportError: - has_faiss = False -import numpy as np - - -GB = 1024 * 1024 * 1024 - - -def call(cmd): - print(cmd) - check_call(cmd, shell=True) - - -def get_batches(directory, lang, prefix="all_avg_pool"): - print(f"Finding in {directory}/{prefix}.{lang}*") - files = glob.glob(f"{directory}/{prefix}.{lang}*") - emb_files = [] - txt_files = [] - for emb_fi in files: - emb_files.append(emb_fi) - txt_fi = emb_fi.replace(prefix, "sentences") - txt_files.append(txt_fi) - return emb_files, txt_files - - -def load_batch(emb_file, dim): - embeddings = np.fromfile(emb_file, dtype=np.float32) - num_rows = int(embeddings.shape[0] / dim) - embeddings = embeddings.reshape((num_rows, dim)) - faiss.normalize_L2(embeddings) - return embeddings - - -def knnGPU_sharded(x_batches_f, y_batches_f, dim, k, direction="x2y"): - if not has_faiss: - raise ImportError("Please install Faiss") - sims = [] - inds = [] - xfrom = 0 - xto = 0 - for x_batch_f in x_batches_f: - yfrom = 0 - yto = 0 - x_batch = load_batch(x_batch_f, dim) - xto = xfrom + x_batch.shape[0] - bsims, binds = [], [] - for y_batch_f in y_batches_f: - y_batch = load_batch(y_batch_f, dim) - neighbor_size = min(k, y_batch.shape[0]) - yto = yfrom + y_batch.shape[0] - print("{}-{} -> {}-{}".format(xfrom, xto, yfrom, yto)) - idx = faiss.IndexFlatIP(dim) - idx = faiss.index_cpu_to_all_gpus(idx) - idx.add(y_batch) - bsim, bind = idx.search(x_batch, neighbor_size) - - bsims.append(bsim) - binds.append(bind + yfrom) - yfrom += y_batch.shape[0] - del idx - del y_batch - bsims = np.concatenate(bsims, axis=1) - binds = np.concatenate(binds, axis=1) - aux = np.argsort(-bsims, axis=1) - sim_batch = np.zeros((x_batch.shape[0], k), dtype=np.float32) - ind_batch = np.zeros((x_batch.shape[0], k), dtype=np.int64) - for i in range(x_batch.shape[0]): - for j in range(k): - sim_batch[i, j] = bsims[i, aux[i, j]] - ind_batch[i, j] = binds[i, aux[i, j]] - sims.append(sim_batch) - inds.append(ind_batch) - xfrom += x_batch.shape[0] - del x_batch - sim = np.concatenate(sims, axis=0) - ind = np.concatenate(inds, axis=0) - return sim, ind - - -def score(sim, fwd_mean, bwd_mean, margin): - return margin(sim, (fwd_mean + bwd_mean) / 2) - - -def score_candidates( - sim_mat, candidate_inds, fwd_mean, bwd_mean, margin, verbose=False -): - print(" - scoring {:d} candidates".format(sim_mat.shape[0])) - scores = np.zeros(candidate_inds.shape) - for i in range(scores.shape[0]): - for j in range(scores.shape[1]): - 
k = int(candidate_inds[i, j]) - scores[i, j] = score(sim_mat[i, j], fwd_mean[i], bwd_mean[k], margin) - return scores - - -def load_text(files): - all_sentences = [] - for fi in files: - with open(fi) as sentence_fi: - for line in sentence_fi: - all_sentences.append(line.strip()) - print(f"Read {len(all_sentences)} sentences") - return all_sentences - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Mine bitext") - parser.add_argument("--src-lang", help="Source language") - parser.add_argument("--tgt-lang", help="Target language") - parser.add_argument( - "--dict-path", help="Path to dictionary file", default="dict.txt" - ) - parser.add_argument( - "--spm-path", help="Path to SPM model file", default="sentence.bpe.model" - ) - parser.add_argument("--dim", type=int, default=1024, help="Embedding dimension") - parser.add_argument("--mem", type=int, default=5, help="Memory in GB") - parser.add_argument("--src-dir", help="Source directory") - parser.add_argument("--tgt-dir", help="Target directory") - parser.add_argument("--output", help="Output path") - parser.add_argument( - "--neighborhood", type=int, default=4, help="Embedding dimension" - ) - parser.add_argument( - "--threshold", type=float, default=1.06, help="Threshold on mined bitext" - ) - parser.add_argument( - "--valid-size", - type=int, - default=2000, - help="Number of sentences used for validation set", - ) - parser.add_argument( - "--min-count", - type=int, - default=50000, - help="Min num sentences used for each language", - ) - args = parser.parse_args() - - x_batches_f, x_sents_f = get_batches(args.src_dir, args.src_lang) - y_batches_f, y_sents_f = get_batches(args.tgt_dir, args.tgt_lang) - margin = lambda a, b: a / b - y2x_sim, y2x_ind = knnGPU_sharded( - y_batches_f, x_batches_f, args.dim, args.neighborhood, direction="y2x" - ) - x2y_sim, x2y_ind = knnGPU_sharded( - x_batches_f, y_batches_f, args.dim, args.neighborhood, direction="x2y" - ) - - x2y_mean = x2y_sim.mean(axis=1) - y2x_mean = y2x_sim.mean(axis=1) - fwd_scores = score_candidates(x2y_sim, x2y_ind, x2y_mean, y2x_mean, margin) - bwd_scores = score_candidates(y2x_sim, y2x_ind, y2x_mean, x2y_mean, margin) - fwd_best = x2y_ind[np.arange(x2y_sim.shape[0]), fwd_scores.argmax(axis=1)] - bwd_best = y2x_ind[np.arange(y2x_sim.shape[0]), bwd_scores.argmax(axis=1)] - indices = np.stack( - ( - np.concatenate((np.arange(x2y_ind.shape[0]), bwd_best)), - np.concatenate((fwd_best, np.arange(y2x_ind.shape[0]))), - ), - axis=1, - ) - scores = np.concatenate((fwd_scores.max(axis=1), bwd_scores.max(axis=1))) - - x_sentences = load_text(x_sents_f) - y_sentences = load_text(y_sents_f) - - threshold = args.threshold - min_count = args.min_count - seen_src, seen_trg = set(), set() - directory = args.output - call(f"mkdir -p {directory}") - src_out = open( - f"{directory}/all.{args.src_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - tgt_out = open( - f"{directory}/all.{args.tgt_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - scores_out = open( - f"{directory}/all.scores", mode="w", encoding="utf-8", errors="surrogateescape" - ) - count = 0 - for i in np.argsort(-scores): - src_ind, trg_ind = indices[i] - if src_ind not in seen_src and trg_ind not in seen_trg: - seen_src.add(src_ind) - seen_trg.add(trg_ind) - if scores[i] > threshold or count < min_count: - if x_sentences[src_ind]: - print(scores[i], file=scores_out) - print(x_sentences[src_ind], file=src_out) - print(y_sentences[trg_ind], file=tgt_out) - count 
+= 1 - else: - print(f"Ignoring sentence: {x_sentences[src_ind]}") - src_out.close() - tgt_out.close() - scores_out.close() - - print(f"Found {count} pairs for threshold={threshold}") - with open(f"{directory}/all.{args.src_lang}") as all_s, open( - f"{directory}/all.{args.tgt_lang}" - ) as all_t, open(f"{directory}/valid.{args.src_lang}", "w") as valid_s, open( - f"{directory}/valid.{args.tgt_lang}", "w" - ) as valid_t, open( - f"{directory}/train.{args.src_lang}", "w" - ) as train_s, open( - f"{directory}/train.{args.tgt_lang}", "w" - ) as train_t: - count = 0 - for s_line, t_line in zip(all_s, all_t): - s_line = s_line.split("\t")[1] - t_line = t_line.split("\t")[1] - if count >= args.valid_size: - train_s.write(s_line) - train_t.write(t_line) - else: - valid_s.write(s_line) - valid_t.write(t_line) - count += 1 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py deleted file mode 100644 index 41cf558970608fa5a9241e91e59ba214b609dc73..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os - -import joblib -import numpy as np - -from examples.textless_nlp.gslm.speech2unit.clustering.utils import get_audio_files -from examples.textless_nlp.gslm.speech2unit.pretrained.utils import get_features - -def get_logger(): - log_format = "[%(asctime)s] [%(levelname)s]: %(message)s" - logging.basicConfig(format=log_format, level=logging.INFO) - logger = logging.getLogger(__name__) - return logger - -def get_parser(): - parser = argparse.ArgumentParser( - description="Quantize using K-means clustering over acoustic features." 
- ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - required=True, - help="Acoustic feature type", - ) - parser.add_argument( - "--kmeans_model_path", - type=str, - required=True, - help="K-means model file path to use for inference", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--checkpoint_path", - type=str, - help="Pretrained model checkpoint", - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--out_dir_path", - required=True, - type=str, - help="File path of quantized output.", - ) - parser.add_argument( - "--extension", type=str, default=".flac", help="Features file path" - ) - return parser - - -def one_hot(feat, n_clusters): - return np.eye(n_clusters)[feat] - -def main(args, logger): - # Feature extraction - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = get_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=1.0, - flatten=False, - ) - logger.info(f"Features extracted for {len(features_batch)} utterances.\n") - logger.info(f"Dimensionality of representation = {features_batch[0].shape[1]}") - - logger.info(f"Loading K-means model from {args.kmeans_model_path} ...") - kmeans_model = joblib.load(open(args.kmeans_model_path, "rb")) - kmeans_model.verbose = False - - _, fnames, _ = get_audio_files(args.manifest_path) - - os.makedirs(args.out_dir_path, exist_ok=True) - logger.info(f"Writing quantized features to {args.out_dir_path}") - for i, feats in enumerate(features_batch): - pred = kmeans_model.predict(feats) - emb = one_hot(pred, kmeans_model.n_clusters) - base_fname = os.path.basename(fnames[i]).rstrip(args.extension) - output_path = os.path.join(args.out_dir_path, f"{base_fname}.npy") - with open(output_path, "wb") as f: - np.save(f, emb) - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py deleted file mode 100644 index 7a7696403d505afdf0f1606f8220801b0f46152f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py +++ /dev/null @@ -1,311 +0,0 @@ -# ***************************************************************************** -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the NVIDIA CORPORATION nor the -# names of its contributors may be used to endorse or promote products -# derived from this software without specific prior written permission. 
-# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# -# ***************************************************************************** -import copy -import torch -from torch.autograd import Variable -import torch.nn.functional as F - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a+input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WaveGlowLoss(torch.nn.Module): - def __init__(self, sigma=1.0): - super(WaveGlowLoss, self).__init__() - self.sigma = sigma - - def forward(self, model_output): - z, log_s_list, log_det_W_list = model_output - for i, log_s in enumerate(log_s_list): - if i == 0: - log_s_total = torch.sum(log_s) - log_det_W_total = log_det_W_list[i] - else: - log_s_total = log_s_total + torch.sum(log_s) - log_det_W_total += log_det_W_list[i] - - loss = torch.sum(z*z)/(2*self.sigma*self.sigma) - log_s_total - log_det_W_total - return loss/(z.size(0)*z.size(1)*z.size(2)) - - -class Invertible1x1Conv(torch.nn.Module): - """ - The layer outputs both the convolution, and the log determinant - of its weight matrix. If reverse=True it does convolution with - inverse - """ - def __init__(self, c): - super(Invertible1x1Conv, self).__init__() - self.conv = torch.nn.Conv1d(c, c, kernel_size=1, stride=1, padding=0, - bias=False) - - # Sample a random orthonormal matrix to initialize weights - W = torch.qr(torch.FloatTensor(c, c).normal_())[0] - - # Ensure determinant is 1.0 not -1.0 - if torch.det(W) < 0: - W[:,0] = -1*W[:,0] - W = W.view(c, c, 1) - self.conv.weight.data = W - - def forward(self, z, reverse=False): - # shape - batch_size, group_size, n_of_groups = z.size() - - W = self.conv.weight.squeeze() - - if reverse: - if not hasattr(self, 'W_inverse'): - # Reverse computation - W_inverse = W.float().inverse() - W_inverse = Variable(W_inverse[..., None]) - if z.type() == 'torch.cuda.HalfTensor': - W_inverse = W_inverse.half() - self.W_inverse = W_inverse - z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0) - return z - else: - # Forward computation - log_det_W = batch_size * n_of_groups * torch.logdet(W) - z = self.conv(z) - return z, log_det_W - - -class WN(torch.nn.Module): - """ - This is the WaveNet like layer for the affine coupling. The primary difference - from WaveNet is the convolutions need not be causal. There is also no dilation - size reset. 
The dilation only doubles on each layer - """ - def __init__(self, n_in_channels, n_mel_channels, n_layers, n_channels, - kernel_size): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - assert(n_channels % 2 == 0) - self.n_layers = n_layers - self.n_channels = n_channels - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - - start = torch.nn.Conv1d(n_in_channels, n_channels, 1) - start = torch.nn.utils.weight_norm(start, name='weight') - self.start = start - - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. This helps with training stability - end = torch.nn.Conv1d(n_channels, 2*n_in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - - cond_layer = torch.nn.Conv1d(n_mel_channels, 2*n_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = 2 ** i - padding = int((kernel_size*dilation - dilation)/2) - in_layer = torch.nn.Conv1d(n_channels, 2*n_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2*n_channels - else: - res_skip_channels = n_channels - res_skip_layer = torch.nn.Conv1d(n_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, forward_input): - audio, spect = forward_input - audio = self.start(audio) - output = torch.zeros_like(audio) - n_channels_tensor = torch.IntTensor([self.n_channels]) - - spect = self.cond_layer(spect) - - for i in range(self.n_layers): - spect_offset = i*2*self.n_channels - acts = fused_add_tanh_sigmoid_multiply( - self.in_layers[i](audio), - spect[:,spect_offset:spect_offset+2*self.n_channels,:], - n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - audio = audio + res_skip_acts[:,:self.n_channels,:] - output = output + res_skip_acts[:,self.n_channels:,:] - else: - output = output + res_skip_acts - - return self.end(output) - - -class WaveGlow(torch.nn.Module): - def __init__(self, n_mel_channels, n_flows, n_group, n_early_every, - n_early_size, WN_config): - super(WaveGlow, self).__init__() - - self.upsample = torch.nn.ConvTranspose1d(n_mel_channels, - n_mel_channels, - 1024, stride=256) - assert(n_group % 2 == 0) - self.n_flows = n_flows - self.n_group = n_group - self.n_early_every = n_early_every - self.n_early_size = n_early_size - self.WN = torch.nn.ModuleList() - self.convinv = torch.nn.ModuleList() - - n_half = int(n_group/2) - - # Set up layers with the right sizes based on how many dimensions - # have been output already - n_remaining_channels = n_group - for k in range(n_flows): - if k % self.n_early_every == 0 and k > 0: - n_half = n_half - int(self.n_early_size/2) - n_remaining_channels = n_remaining_channels - self.n_early_size - self.convinv.append(Invertible1x1Conv(n_remaining_channels)) - self.WN.append(WN(n_half, n_mel_channels*n_group, **WN_config)) - self.n_remaining_channels = n_remaining_channels # Useful during inference - - def forward(self, forward_input): - """ - forward_input[0] = mel_spectrogram: batch x n_mel_channels x frames - forward_input[1] = audio: batch x time - """ - spect, audio = forward_input - - # Upsample spectrogram to size of audio - spect = self.upsample(spect) - 
assert(spect.size(2) >= audio.size(1)) - if spect.size(2) > audio.size(1): - spect = spect[:, :, :audio.size(1)] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - audio = audio.unfold(1, self.n_group, self.n_group).permute(0, 2, 1) - output_audio = [] - log_s_list = [] - log_det_W_list = [] - - for k in range(self.n_flows): - if k % self.n_early_every == 0 and k > 0: - output_audio.append(audio[:,:self.n_early_size,:]) - audio = audio[:,self.n_early_size:,:] - - audio, log_det_W = self.convinv[k](audio) - log_det_W_list.append(log_det_W) - - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - log_s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = torch.exp(log_s)*audio_1 + b - log_s_list.append(log_s) - - audio = torch.cat([audio_0, audio_1],1) - - output_audio.append(audio) - return torch.cat(output_audio,1), log_s_list, log_det_W_list - - def infer(self, spect, sigma=1.0): - spect = self.upsample(spect) - # trim conv artifacts. maybe pad spec to kernel multiple - time_cutoff = self.upsample.kernel_size[0] - self.upsample.stride[0] - spect = spect[:, :, :-time_cutoff] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - if spect.type() == 'torch.cuda.HalfTensor': - audio = torch.cuda.HalfTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - else: - audio = torch.cuda.FloatTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - - audio = torch.autograd.Variable(sigma*audio) - - for k in reversed(range(self.n_flows)): - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - - s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = (audio_1 - b)/torch.exp(s) - audio = torch.cat([audio_0, audio_1],1) - - audio = self.convinv[k](audio, reverse=True) - - if k % self.n_early_every == 0 and k > 0: - if spect.type() == 'torch.cuda.HalfTensor': - z = torch.cuda.HalfTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - else: - z = torch.cuda.FloatTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - audio = torch.cat((sigma*z, audio),1) - - audio = audio.permute(0,2,1).contiguous().view(audio.size(0), -1).data - return audio - - @staticmethod - def remove_weightnorm(model): - waveglow = model - for WN in waveglow.WN: - WN.start = torch.nn.utils.remove_weight_norm(WN.start) - WN.in_layers = remove(WN.in_layers) - WN.cond_layer = torch.nn.utils.remove_weight_norm(WN.cond_layer) - WN.res_skip_layers = remove(WN.res_skip_layers) - return waveglow - - -def remove(conv_list): - new_conv_list = torch.nn.ModuleList() - for old_conv in conv_list: - old_conv = torch.nn.utils.remove_weight_norm(old_conv) - new_conv_list.append(old_conv) - return new_conv_list diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/scalar/ops.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/scalar/ops.py deleted file mode 100644 index c74f530380b393ffc53ecfb1398000079495772f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/quantization/scalar/ops.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def emulate_int(w, bits, method, scale=None, zero_point=None): - q = globals()[f"emulate_int8_{method}"] - return q(w, scale=scale, zero_point=zero_point, bits=bits) - - -def quantize(w, scale, zero_point, bits=8): - # In the default behavior, max_val = 255. - max_val = 2 ** bits - 1 - return ( - torch.clamp(torch.round(w / scale + zero_point), 0, max_val) - zero_point - ) * scale - - -def emulate_int8_histogram(w, scale=None, zero_point=None, bits=8): - if scale is None: - obs = torch.quantization.observer.HistogramObserver() - obs.to(device=w.device) - _ = obs(w.float()) - scale, zero_point = obs.calculate_qparams() - scale = scale.cuda().type_as(w) - zero_point = zero_point.cuda().type_as(w) - return quantize(w, scale, zero_point, bits=bits), scale, zero_point - - -def emulate_int8_channel(w, scale=None, zero_point=None, bits=8): - if scale is None: - obs = torch.quantization.observer.PerChannelMinMaxObserver( - ch_axis=-1, qscheme=torch.per_channel_symmetric - ) - obs.to(device=w.device) - _ = obs(w) - scale, zero_point, ch_axis = obs.get_qparams() - scale = scale.cuda().type_as(w) - zero_point = zero_point.cuda().type_as(w) - return quantize(w, scale, zero_point, bits=bits), scale, zero_point - - -def emulate_int8_tensor(w, scale=None, zero_point=None, bits=8): - if scale is None: - obs = torch.quantization.observer.MinMaxObserver() - obs.to(device=w.device) - _ = obs(w) - scale, zero_point = obs.calculate_qparams() - scale = scale.cuda().type_as(w) - zero_point = zero_point.cuda().type_as(w) - return quantize(w, scale, zero_point, bits=bits), scale, zero_point diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/unittest.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/unittest.py deleted file mode 100644 index 0675c022e4ba85d38d1f813490f6740150909524..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
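# Editor's aside on the scalar-quantization ops module above (not part of this
# unittest helper): a minimal, self-contained sketch of the emulated int8 round trip
# that its `quantize` function performs, using a hand-picked scale and zero point
# instead of a torch.quantization observer.
import torch

def fake_quantize(w, scale, zero_point, bits=8):
    # Snap each weight to the nearest of 2**bits integer levels, clamp to the valid
    # range, then map back to floats; values stay float but lie on the int8 grid.
    max_val = 2 ** bits - 1
    q = torch.clamp(torch.round(w / scale + zero_point), 0, max_val)
    return (q - zero_point) * scale

w = torch.tensor([-0.50, 0.00, 0.37, 1.00])
w_q = fake_quantize(w, scale=torch.tensor(1.5 / 255), zero_point=torch.tensor(85.0))
# w_q == tensor([-0.5000, 0.0000, 0.3706, 1.0000]): unchanged except for rounding
# to the nearest representable quantization level.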
- -import unittest - -import numpy as np -from torch.autograd import Variable - - -def as_numpy(v): - if isinstance(v, Variable): - v = v.data - return v.cpu().numpy() - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, a, b, atol=1e-3, rtol=1e-3): - npa, npb = as_numpy(a), as_numpy(b) - self.assertTrue( - np.allclose(npa, npb, atol=atol), - 'Tensor close check failed\n{}\n{}\nadiff={}, rdiff={}'.format(a, b, np.abs(npa - npb).max(), np.abs((npa - npb) / np.fmax(npa, 1e-5)).max()) - ) diff --git a/spaces/OpenMind-AI/starchat-playground/share_btn.py b/spaces/OpenMind-AI/starchat-playground/share_btn.py deleted file mode 100644 index 14c0cc9147bd6aaadd9c1df07a763b542d696987..0000000000000000000000000000000000000000 --- a/spaces/OpenMind-AI/starchat-playground/share_btn.py +++ /dev/null @@ -1,111 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - // const gradioEl = document.querySelector('body > gradio-app'); - const gradioEl = document.querySelector("gradio-app"); - const inputTxt = gradioEl.querySelector('#q-input textarea').value; - const outputTxt = gradioEl.querySelector('#q-output').outerHTML; - - const titleLength = 150; - let titleTxt = inputTxt; - if(titleTxt.length > titleLength){ - titleTxt = titleTxt.slice(0, titleLength) + ' ...'; - } - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!inputTxt || !outputTxt){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const descriptionMd = `### Question: -${inputTxt} - -### Answer: - -${outputTxt}`; - - const params = { - title: titleTxt, - description: descriptionMd, - }; - - const paramsStr = Object.entries(params) - .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`) - .join('&'); - - window.open(`https://huggingface.co/spaces/HuggingFaceH4/star-chat-demo/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" - -share_btn_css = """ -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { transform: rotate(0deg); } - to { transform: rotate(360deg); } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 
13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -""" diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/docs/source/conf.py b/spaces/OpenMotionLab/MotionGPT/pyrender/docs/source/conf.py deleted file mode 100644 index 6bf194c375e7e789b334a838953adfeaf2eb59b6..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/docs/source/conf.py +++ /dev/null @@ -1,352 +0,0 @@ -# -*- coding: utf-8 -*- -# -# core documentation build configuration file, created by -# sphinx-quickstart on Sun Oct 16 14:33:48 2016. -# -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -import sys -import os -from pyrender import __version__ -from sphinx.domains.python import PythonDomain - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -sys.path.insert(0, os.path.abspath('../../')) - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -#needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - 'sphinx.ext.autodoc', - 'sphinx.ext.autosummary', - 'sphinx.ext.coverage', - 'sphinx.ext.githubpages', - 'sphinx.ext.intersphinx', - 'sphinx.ext.napoleon', - 'sphinx.ext.viewcode', - 'sphinx_automodapi.automodapi', - 'sphinx_automodapi.smart_resolver' -] -numpydoc_class_members_toctree = False -automodapi_toctreedirnm = 'generated' -automodsumm_inherited_members = True - -# Add any paths that contain templates here, relative to this directory. -templates_path = ['_templates'] - -# The suffix(es) of source filenames. -# You can specify multiple suffix as a list of string: -# source_suffix = ['.rst', '.md'] -source_suffix = '.rst' - -# The encoding of source files. -#source_encoding = 'utf-8-sig' - -# The master toctree document. -master_doc = 'index' - -# General information about the project. -project = u'pyrender' -copyright = u'2018, Matthew Matl' -author = u'Matthew Matl' - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = __version__ -# The full version, including alpha/beta/rc tags. -release = __version__ - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. 
-language = None - -# There are two options for replacing |today|: either, you set today to some -# non-false value, then it is used: -#today = '' -# Else, today_fmt is used as the format for a strftime call. -#today_fmt = '%B %d, %Y' - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -exclude_patterns = [] - -# The reST default role (used for this markup: `text`) to use for all -# documents. -#default_role = None - -# If true, '()' will be appended to :func: etc. cross-reference text. -#add_function_parentheses = True - -# If true, the current module name will be prepended to all description -# unit titles (such as .. function::). -#add_module_names = True - -# If true, sectionauthor and moduleauthor directives will be shown in the -# output. They are ignored by default. -#show_authors = False - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' - -# A list of ignored prefixes for module index sorting. -#modindex_common_prefix = [] - -# If true, keep warnings as "system message" paragraphs in the built documents. -#keep_warnings = False - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -import sphinx_rtd_theme -html_theme = 'sphinx_rtd_theme' -html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -#html_theme_options = {} - -# Add any paths that contain custom themes here, relative to this directory. -#html_theme_path = [] - -# The name for this set of Sphinx documents. If None, it defaults to -# " v documentation". -#html_title = None - -# A shorter title for the navigation bar. Default is the same as html_title. -#html_short_title = None - -# The name of an image file (relative to this directory) to place at the top -# of the sidebar. -#html_logo = None - -# The name of an image file (relative to this directory) to use as a favicon of -# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 -# pixels large. -#html_favicon = None - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ['_static'] - -# Add any extra paths that contain custom files (such as robots.txt or -# .htaccess) here, relative to this directory. These files are copied -# directly to the root of the documentation. -#html_extra_path = [] - -# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, -# using the given strftime format. -#html_last_updated_fmt = '%b %d, %Y' - -# If true, SmartyPants will be used to convert quotes and dashes to -# typographically correct entities. -#html_use_smartypants = True - -# Custom sidebar templates, maps document names to template names. -#html_sidebars = {} - -# Additional templates that should be rendered to pages, maps page names to -# template names. -#html_additional_pages = {} - -# If false, no module index is generated. -#html_domain_indices = True - -# If false, no index is generated. 
-#html_use_index = True - -# If true, the index is split into individual pages for each letter. -#html_split_index = False - -# If true, links to the reST sources are added to the pages. -#html_show_sourcelink = True - -# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. -#html_show_sphinx = True - -# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. -#html_show_copyright = True - -# If true, an OpenSearch description file will be output, and all pages will -# contain a tag referring to it. The value of this option must be the -# base URL from which the finished HTML is served. -#html_use_opensearch = '' - -# This is the file name suffix for HTML files (e.g. ".xhtml"). -#html_file_suffix = None - -# Language to be used for generating the HTML full-text search index. -# Sphinx supports the following languages: -# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' -# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr' -#html_search_language = 'en' - -# A dictionary with options for the search language support, empty by default. -# Now only 'ja' uses this config value -#html_search_options = {'type': 'default'} - -# The name of a javascript file (relative to the configuration directory) that -# implements a search results scorer. If empty, the default will be used. -#html_search_scorer = 'scorer.js' - -# Output file base name for HTML help builder. -htmlhelp_basename = 'coredoc' - -# -- Options for LaTeX output --------------------------------------------- - -latex_elements = { -# The paper size ('letterpaper' or 'a4paper'). -#'papersize': 'letterpaper', - -# The font size ('10pt', '11pt' or '12pt'). -#'pointsize': '10pt', - -# Additional stuff for the LaTeX preamble. -#'preamble': '', - -# Latex figure (float) alignment -#'figure_align': 'htbp', -} - -# Grouping the document tree into LaTeX files. List of tuples -# (source start file, target name, title, -# author, documentclass [howto, manual, or own class]). -latex_documents = [ - (master_doc, 'pyrender.tex', u'pyrender Documentation', - u'Matthew Matl', 'manual'), -] - -# The name of an image file (relative to this directory) to place at the top of -# the title page. -#latex_logo = None - -# For "manual" documents, if this is true, then toplevel headings are parts, -# not chapters. -#latex_use_parts = False - -# If true, show page references after internal links. -#latex_show_pagerefs = False - -# If true, show URL addresses after external links. -#latex_show_urls = False - -# Documents to append as an appendix to all manuals. -#latex_appendices = [] - -# If false, no module index is generated. -#latex_domain_indices = True - - -# -- Options for manual page output --------------------------------------- - -# One entry per manual page. List of tuples -# (source start file, name, description, authors, manual section). -man_pages = [ - (master_doc, 'pyrender', u'pyrender Documentation', - [author], 1) -] - -# If true, show URL addresses after external links. -#man_show_urls = False - - -# -- Options for Texinfo output ------------------------------------------- - -# Grouping the document tree into Texinfo files. List of tuples -# (source start file, target name, title, author, -# dir menu entry, description, category) -texinfo_documents = [ - (master_doc, 'pyrender', u'pyrender Documentation', - author, 'pyrender', 'One line description of project.', - 'Miscellaneous'), -] - -# Documents to append as an appendix to all manuals. -#texinfo_appendices = [] - -# If false, no module index is generated. 
-#texinfo_domain_indices = True - -# How to display URL addresses: 'footnote', 'no', or 'inline'. -#texinfo_show_urls = 'footnote' - -# If true, do not generate a @detailmenu in the "Top" node's menu. -#texinfo_no_detailmenu = False - -intersphinx_mapping = { - 'python' : ('https://docs.python.org/', None), - 'pyrender' : ('https://pyrender.readthedocs.io/en/latest/', None), -} - -# Autosummary fix -autosummary_generate = True - -# Try to suppress multiple-definition warnings by always taking the shorter -# path when two or more paths have the same base module - -class MyPythonDomain(PythonDomain): - - def find_obj(self, env, modname, classname, name, type, searchmode=0): - """Ensures an object always resolves to the desired module - if defined there.""" - orig_matches = PythonDomain.find_obj( - self, env, modname, classname, name, type, searchmode - ) - - if len(orig_matches) <= 1: - return orig_matches - - # If multiple matches, try to take the shortest if all the modules are - # the same - first_match_name_sp = orig_matches[0][0].split('.') - base_name = first_match_name_sp[0] - min_len = len(first_match_name_sp) - best_match = orig_matches[0] - - for match in orig_matches[1:]: - match_name = match[0] - match_name_sp = match_name.split('.') - match_base = match_name_sp[0] - - # If we have mismatched bases, return them all to trigger warnings - if match_base != base_name: - return orig_matches - - # Otherwise, check and see if it's shorter - if len(match_name_sp) < min_len: - min_len = len(match_name_sp) - best_match = match - - return (best_match,) - - -def setup(sphinx): - """Use MyPythonDomain in place of PythonDomain""" - sphinx.override_domain(MyPythonDomain) - diff --git a/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/browser.ts b/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/korean.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name 
= digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/PrabhuKiranKonda/fastapi-postgres-todo-api/Dockerfile b/spaces/PrabhuKiranKonda/fastapi-postgres-todo-api/Dockerfile deleted file mode 100644 index e77833e6f9071d99d973e3742f6fefcc015be058..0000000000000000000000000000000000000000 --- a/spaces/PrabhuKiranKonda/fastapi-postgres-todo-api/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9-slim -WORKDIR /app -COPY requirements.txt ./requirements.txt -RUN apt-get update \ - && apt-get -y install libpq-dev gcc \ - && pip install psycopg2 -# Install uvicorn -RUN pip install uvicorn -# Install dependencies -RUN pip install -r requirements.txt -COPY . 
/app -ENTRYPOINT ["uvicorn", "main:app"] -CMD ["--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/RMXK/RVC_HFF/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/Rashid2026/Course-Recommender/app.py b/spaces/Rashid2026/Course-Recommender/app.py deleted file mode 100644 index c32851f0c2382cf4f02d78d30cc92bc8eef2e96e..0000000000000000000000000000000000000000 --- a/spaces/Rashid2026/Course-Recommender/app.py +++ /dev/null @@ -1,2 +0,0 @@ - -print("Hello world! 
This is an API") \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/4Seasons/localize.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/4Seasons/localize.py deleted file mode 100644 index 0451130bceef1bcb6c3cba0ab74fcaa4645e1f3a..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/hloc/pipelines/4Seasons/localize.py +++ /dev/null @@ -1,86 +0,0 @@ -from pathlib import Path -import argparse - -from ... import extract_features, match_features, localize_sfm, logger -from .utils import get_timestamps, delete_unused_images -from .utils import generate_query_lists, generate_localization_pairs -from .utils import prepare_submission, evaluate_submission - -relocalization_files = { - "training": "RelocalizationFilesTrain//relocalizationFile_recording_2020-03-24_17-36-22.txt", - "validation": "RelocalizationFilesVal/relocalizationFile_recording_2020-03-03_12-03-23.txt", - "test0": "RelocalizationFilesTest/relocalizationFile_recording_2020-03-24_17-45-31_*.txt", - "test1": "RelocalizationFilesTest/relocalizationFile_recording_2020-04-23_19-37-00_*.txt", -} - -parser = argparse.ArgumentParser() -parser.add_argument( - "--sequence", - type=str, - required=True, - choices=["training", "validation", "test0", "test1"], - help="Sequence to be relocalized.", -) -parser.add_argument( - "--dataset", - type=Path, - default="datasets/4Seasons", - help="Path to the dataset, default: %(default)s", -) -parser.add_argument( - "--outputs", - type=Path, - default="outputs/4Seasons", - help="Path to the output directory, default: %(default)s", -) -args = parser.parse_args() -sequence = args.sequence - -data_dir = args.dataset -ref_dir = data_dir / "reference" -assert ref_dir.exists(), f"{ref_dir} does not exist" -seq_dir = data_dir / sequence -assert seq_dir.exists(), f"{seq_dir} does not exist" -seq_images = seq_dir / "undistorted_images" -reloc = ref_dir / relocalization_files[sequence] - -output_dir = args.outputs -output_dir.mkdir(exist_ok=True, parents=True) -query_list = output_dir / f"{sequence}_queries_with_intrinsics.txt" -ref_pairs = output_dir / "pairs-db-dist20.txt" -ref_sfm = output_dir / "sfm_superpoint+superglue" -results_path = output_dir / f"localization_{sequence}_hloc+superglue.txt" -submission_dir = output_dir / "submission_hloc+superglue" - -num_loc_pairs = 10 -loc_pairs = output_dir / f"pairs-query-{sequence}-dist{num_loc_pairs}.txt" - -fconf = extract_features.confs["superpoint_max"] -mconf = match_features.confs["superglue"] - -# Not all query images that are used for the evaluation -# To save time in feature extraction, we delete unsused images. -timestamps = get_timestamps(reloc, 1) -delete_unused_images(seq_images, timestamps) - -# Generate a list of query images with their intrinsics. -generate_query_lists(timestamps, seq_dir, query_list) - -# Generate the localization pairs from the given reference frames. -generate_localization_pairs( - sequence, reloc, num_loc_pairs, ref_pairs, loc_pairs -) - -# Extract, match, amd localize. -ffile = extract_features.main(fconf, seq_images, output_dir) -mfile = match_features.main(mconf, loc_pairs, fconf["output"], output_dir) -localize_sfm.main(ref_sfm, query_list, loc_pairs, ffile, mfile, results_path) - -# Convert the absolute poses to relative poses with the reference frames. 
-submission_dir.mkdir(exist_ok=True) -prepare_submission(results_path, reloc, ref_dir / "poses.txt", submission_dir) - -# If not a test sequence: evaluation the localization accuracy -if "test" not in sequence: - logger.info("Evaluating the relocalization submission...") - evaluate_submission(submission_dir, reloc) diff --git a/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/__init__.py b/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/__init__.py deleted file mode 100644 index 4eaf01e90440afeb485a4743f181dac348ede63d..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/GlueStick/gluestick/__init__.py +++ /dev/null @@ -1,55 +0,0 @@ -import collections.abc as collections -from pathlib import Path - -import torch - -GLUESTICK_ROOT = Path(__file__).parent.parent - - -def get_class(mod_name, base_path, BaseClass): - """Get the class object which inherits from BaseClass and is defined in - the module named mod_name, child of base_path. - """ - import inspect - - mod_path = "{}.{}".format(base_path, mod_name) - mod = __import__(mod_path, fromlist=[""]) - classes = inspect.getmembers(mod, inspect.isclass) - # Filter classes defined in the module - classes = [c for c in classes if c[1].__module__ == mod_path] - # Filter classes inherited from BaseModel - classes = [c for c in classes if issubclass(c[1], BaseClass)] - assert len(classes) == 1, classes - return classes[0][1] - - -def get_model(name): - from .models.base_model import BaseModel - - return get_class("models." + name, __name__, BaseModel) - - -def numpy_image_to_torch(image): - """Normalize the image tensor and reorder the dimensions.""" - if image.ndim == 3: - image = image.transpose((2, 0, 1)) # HxWxC to CxHxW - elif image.ndim == 2: - image = image[None] # add channel axis - else: - raise ValueError(f"Not an image: {image.shape}") - return torch.from_numpy(image / 255.0).float() - - -def map_tensor(input_, func): - if isinstance(input_, (str, bytes)): - return input_ - elif isinstance(input_, collections.Mapping): - return {k: map_tensor(sample, func) for k, sample in input_.items()} - elif isinstance(input_, collections.Sequence): - return [map_tensor(sample, func) for sample in input_] - else: - return func(input_) - - -def batch_to_np(batch): - return map_tensor(batch, lambda t: t.detach().cpu().numpy()[0]) diff --git a/spaces/Redgon/bingo/src/components/ui/voice/index.tsx b/spaces/Redgon/bingo/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
- {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
- ) - })} -
- ) -} diff --git a/spaces/Reeve/Ohayou_Face/models/__init__.py b/spaces/Reeve/Ohayou_Face/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Reself/StableVideo/ldm/modules/midas/utils.py b/spaces/Reself/StableVideo/ldm/modules/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- a/spaces/Reself/StableVideo/ldm/modules/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. - - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. - """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. 
-
-    Args:
-        img (array): image
-
-    Returns:
-        tensor: data ready for network
-    """
-    height_orig = img.shape[0]
-    width_orig = img.shape[1]
-
-    if width_orig > height_orig:
-        scale = width_orig / 384
-    else:
-        scale = height_orig / 384
-
-    height = (np.ceil(height_orig / scale / 32) * 32).astype(int)
-    width = (np.ceil(width_orig / scale / 32) * 32).astype(int)
-
-    img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
-
-    img_resized = (
-        torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float()
-    )
-    img_resized = img_resized.unsqueeze(0)
-
-    return img_resized
-
-
-def resize_depth(depth, width, height):
-    """Resize depth map and bring to CPU (numpy).
-
-    Args:
-        depth (tensor): depth
-        width (int): image width
-        height (int): image height
-
-    Returns:
-        array: processed depth
-    """
-    depth = torch.squeeze(depth[0, :, :, :]).to("cpu")
-
-    depth_resized = cv2.resize(
-        depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC
-    )
-
-    return depth_resized
-
-def write_depth(path, depth, bits=1):
-    """Write depth map to pfm and png file.
-
-    Args:
-        path (str): filepath without extension
-        depth (array): depth
-    """
-    write_pfm(path + ".pfm", depth.astype(np.float32))
-
-    depth_min = depth.min()
-    depth_max = depth.max()
-
-    max_val = (2**(8*bits))-1
-
-    if depth_max - depth_min > np.finfo("float").eps:
-        out = max_val * (depth - depth_min) / (depth_max - depth_min)
-    else:
-        out = np.zeros(depth.shape, dtype=depth.dtype)
-
-    if bits == 1:
-        cv2.imwrite(path + ".png", out.astype("uint8"))
-    elif bits == 2:
-        cv2.imwrite(path + ".png", out.astype("uint16"))
-
-    return
diff --git a/spaces/Ritori/TTS_Yui/Yue_gradio.py b/spaces/Ritori/TTS_Yui/Yue_gradio.py
deleted file mode 100644
index 3bb55e7727f250d210ee6bfe2b958a7e05434a70..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/Yue_gradio.py
+++ /dev/null
@@ -1,243 +0,0 @@
-# Works well
-
-import os
-os.system('pip install -U tensorflow')
-os.system('pip install -q unidecode tensorboardX')
-os.system('pip install librosa==0.8.0')
-os.system('pip install pysoundfile==0.9.0.post1')
-os.system('pip install unidecode==1.3.4')
-os.system('pip install pyopenjtalk --no-build-isolation')
-os.system('pip install inflect==5.6.2')
-os.system('pip install janome==0.4.2')
-os.system('pip install tqdm -q')
-os.system('pip install gdown')
-os.system('pip install -q librosa unidecode')
-
-os.system('pip install ipython')
-os.system('pip install --upgrade jupyter ipywidgets')
-os.system('jupyter nbextension enable --py widgetsnbextension')
-os.system('pip uninstall tqdm')
-os.system('pip install tqdm')
-
-import time
-import pyopenjtalk
-import soundfile as sf
-import gradio as gr
-import torch
-import IPython.display as ipd
-import numpy as np
-import torch
-import json
-from hparams import create_hparams
-from model import Tacotron2
-from layers import TacotronSTFT
-from audio_processing import griffin_lim
-from text import text_to_sequence
-from env import AttrDict
-from meldataset import MAX_WAV_VALUE
-from models import Generator
-
-#@title Configure and run
-
-# International HiFi-GAN model (sounds a bit robotic): 1qpgI41wNXFcH-iKq1Y42JlBC9j0je8PW
-#@markdown Put the path of your trained Tacotron2 model in `Tacotron2_Model`
-Tacotron2_Model = '/content/Yui_TrapGenesis'#@param {type:"string"}
-TACOTRON2_ID = Tacotron2_Model
-HIFIGAN_ID = "1qpgI41wNXFcH-iKq1Y42JlBC9j0je8PW"
-#@markdown Choose the cleaner used to preprocess the text
-text_cleaner = 'japanese_phrase_cleaners'#@param {type:"string"}
-import pyopenjtalk
-import soundfile as sf
-import gradio as gr
-
-# Global variable declarations
-model = None
-hparams = 
None -hifigan = None -thisdict = None -pronounciation_dictionary = False -show_graphs = False # 添加show_graphs变量,并赋予默认值 - -# 初始化函数 -def initialize(): - global model, hparams, hifigan, thisdict, pronounciation_dictionary - - # 检查是否已初始化 - try: - initialized - except NameError: - print("Setting up, please wait.\n") - - from tqdm.notebook import tqdm - with tqdm(total=5, leave=False) as pbar: - import os - from os.path import exists, join, basename, splitext - git_repo_url = 'https://github.com/CjangCjengh/tacotron2-japanese.git' - project_name = splitext(basename(git_repo_url))[0] - if not exists(project_name): - # clone and install - os.system('git clone -q --recursive {git_repo_url}') - os.system('git clone -q --recursive https://github.com/SortAnon/hifi-gan') - - pbar.update(1) # downloaded TT2 and HiFi-GAN - import sys - sys.path.append('hifi-gan') - sys.path.append(project_name) - import time - import matplotlib - import matplotlib.pylab as plt - import gdown - d = 'https://drive.google.com/uc?id=' - - # %matplotlib inline - import IPython.display as ipd - import numpy as np - import torch - import json - from hparams import create_hparams - from model import Tacotron2 - from layers import TacotronSTFT - from audio_processing import griffin_lim - from text import text_to_sequence - from env import AttrDict - from meldataset import MAX_WAV_VALUE - from models import Generator - - pbar.update(1) # initialized Dependancies - - graph_width = 900 - graph_height = 360 - def plot_data(data, figsize=(int(graph_width/100), int(graph_height/100))): - # %matplotlib inline - fig, axes = plt.subplots(1, len(data), figsize=figsize) - for i in range(len(data)): - axes[i].imshow(data[i], aspect='auto', origin='upper', - interpolation='none', cmap='inferno') - fig.canvas.draw() - plt.show() - - # Setup Pronounciation Dictionary - os.system('wget https://github.com/wind4000/tacotron2/releases/download/v0.2/merged.dict.txt') - thisdict = {} - for line in reversed((open('merged.dict.txt', "r").read()).splitlines()): - thisdict[(line.split(" ",1))[0]] = (line.split(" ",1))[1].strip() - - pbar.update(1) # Downloaded and Set up Pronounciation Dictionary - - def ARPA(text, punctuation=r"!?,.;", EOS_Token=True): - out = '' - for word_ in text.split(" "): - word=word_; end_chars = '' - while any(elem in word for elem in punctuation) and len(word) > 1: - if word[-1] in punctuation: end_chars = word[-1] + end_chars; word = word[:-1] - else: break - try: - word_arpa = thisdict[word.upper()] - word = "{" + str(word_arpa) + "}" - except KeyError: pass - out = (out + " " + word + end_chars).strip() - if EOS_Token and out[-1] != ";": out += ";" - return out - - def get_hifigan(MODEL_ID): - # Download HiFi-GAN - hifigan_pretrained_model = 'hifimodel' - gdown.download(d+MODEL_ID, hifigan_pretrained_model, quiet=False) - if not exists(hifigan_pretrained_model): - raise Exception("HiFI-GAN model failed to download!") - - # Load HiFi-GAN - conf = os.path.join("hifi-gan", "config_v1.json") - with open(conf) as f: - json_config = json.loads(f.read()) - h = AttrDict(json_config) - torch.manual_seed(h.seed) - hifigan = Generator(h).to(torch.device("cuda")) - state_dict_g = torch.load(hifigan_pretrained_model, map_location=torch.device("cuda")) - hifigan.load_state_dict(state_dict_g["generator"]) - hifigan.eval() - hifigan.remove_weight_norm() - return hifigan, h - - hifigan, h = get_hifigan(HIFIGAN_ID) - pbar.update(1) # Downloaded and Set up HiFi-GAN - - def has_MMI(STATE_DICT): - return any(True for x in STATE_DICT.keys() if 
"mi." in x) - - def get_Tactron2(MODEL_ID): - # Download Tacotron2 - tacotron2_pretrained_model = TACOTRON2_ID - if not exists(tacotron2_pretrained_model): - raise Exception("Tacotron2 model failed to download!") - # Load Tacotron2 and Config - hparams = create_hparams() - hparams.sampling_rate = 22050 - hparams.max_decoder_steps = 2000 # Max Duration - hparams.gate_threshold = 0.80 # Model must be 25% sure the clip is over before ending generation - model = Tacotron2(hparams) - state_dict = torch.load(tacotron2_pretrained_model)['state_dict'] - if has_MMI(state_dict): - raise Exception("ERROR: This notebook does not currently support MMI models.") - model.load_state_dict(state_dict) - _ = model.cuda().eval().half() - return model, hparams - - model, hparams = get_Tactron2(TACOTRON2_ID) - previous_tt2_id = TACOTRON2_ID - - pbar.update(1) # Downloaded and Set up Tacotron2 - - # 初始化 -initialize() - -import soundfile as sf - -def end_to_end_infer(text, pronounciation_dictionary, show_graphs): - audio = None # 定义一个变量用于存储音频数据 - for i in [x for x in text.split("\n") if len(x)]: - if not pronounciation_dictionary: - if i[-1] != ";": - i = i + ";" - else: - i = ARPA(i) - with torch.no_grad(): - sequence = np.array(text_to_sequence(i, [text_cleaner]))[None, :] - sequence = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence) - if show_graphs: - plot_data((mel_outputs_postnet.float().data.cpu().numpy()[0], - alignments.float().data.cpu().numpy()[0].T)) - y_g_hat = hifigan(mel_outputs_postnet.float()) - audio = y_g_hat.squeeze() - audio = audio * MAX_WAV_VALUE - output_filename = f"output_{time.strftime('%Y%m%d%H%M%S')}.wav" - sf.write(output_filename, audio.cpu().numpy().astype('int16'), hparams.sampling_rate) - print(f"音频已保存为 {output_filename}") - print("") - ipd.display(ipd.Audio(audio.cpu().numpy().astype("int16"), rate=hparams.sampling_rate)) - return audio # 返回音频数据 - -# 文本到语音转换函数 -def text_to_speech(text, max_decoder_steps=2000, gate_threshold=0.5): - global model, hparams, hifigan, thisdict, pronounciation_dictionary, show_graphs - - hparams.max_decoder_steps = max_decoder_steps - hparams.gate_threshold = gate_threshold - output_filename = f"output_{time.strftime('%Y%m%d%H%M%S')}.wav" - audio = end_to_end_infer(text, pronounciation_dictionary, show_graphs) - if audio is not None: - sf.write(output_filename, audio.cpu().numpy().astype('int16'), hparams.sampling_rate) - return output_filename - else: - return None - -# Gradio界面 -inputs = [ - gr.inputs.Textbox(lines=3, label="输入文本"), - gr.inputs.Slider(minimum=100, maximum=5000, default=2000, step=100, label="最大解码步数"), - gr.inputs.Slider(minimum=0.0, maximum=1.0, default=0.5, step=0.05, label="门控阈值") -] -outputs = gr.outputs.File(label="下载生成的音频") - -gr.Interface(fn=text_to_speech, inputs=inputs, outputs=outputs).launch(debug=True,share=True) \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py deleted file mode 100644 index 2c0da3503b75441738efe38d70352b55a210a34a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/optimizer/default_constructor.py +++ /dev/null @@ -1,249 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import torch -from torch.nn import GroupNorm, LayerNorm - -from annotator.uniformer.mmcv.utils import _BatchNorm, _InstanceNorm, build_from_cfg, is_list_of -from annotator.uniformer.mmcv.utils.ext_loader import check_ops_exist -from .builder import OPTIMIZER_BUILDERS, OPTIMIZERS - - -@OPTIMIZER_BUILDERS.register_module() -class DefaultOptimizerConstructor: - """Default constructor for optimizers. - - By default each parameter share the same optimizer settings, and we - provide an argument ``paramwise_cfg`` to specify parameter-wise settings. - It is a dict and may contain the following fields: - - - ``custom_keys`` (dict): Specified parameters-wise settings by keys. If - one of the keys in ``custom_keys`` is a substring of the name of one - parameter, then the setting of the parameter will be specified by - ``custom_keys[key]`` and other setting like ``bias_lr_mult`` etc. will - be ignored. It should be noted that the aforementioned ``key`` is the - longest key that is a substring of the name of the parameter. If there - are multiple matched keys with the same length, then the key with lower - alphabet order will be chosen. - ``custom_keys[key]`` should be a dict and may contain fields ``lr_mult`` - and ``decay_mult``. See Example 2 below. - - ``bias_lr_mult`` (float): It will be multiplied to the learning - rate for all bias parameters (except for those in normalization - layers and offset layers of DCN). - - ``bias_decay_mult`` (float): It will be multiplied to the weight - decay for all bias parameters (except for those in - normalization layers, depthwise conv layers, offset layers of DCN). - - ``norm_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of normalization - layers. - - ``dwconv_decay_mult`` (float): It will be multiplied to the weight - decay for all weight and bias parameters of depthwise conv - layers. - - ``dcn_offset_lr_mult`` (float): It will be multiplied to the learning - rate for parameters of offset layer in the deformable convs - of a model. - - ``bypass_duplicate`` (bool): If true, the duplicate parameters - would not be added into optimizer. Default: False. - - Note: - 1. If the option ``dcn_offset_lr_mult`` is used, the constructor will - override the effect of ``bias_lr_mult`` in the bias of offset - layer. So be careful when using both ``bias_lr_mult`` and - ``dcn_offset_lr_mult``. If you wish to apply both of them to the - offset layer in deformable convs, set ``dcn_offset_lr_mult`` - to the original ``dcn_offset_lr_mult`` * ``bias_lr_mult``. - 2. If the option ``dcn_offset_lr_mult`` is used, the constructor will - apply it to all the DCN layers in the model. So be careful when - the model contains multiple DCN layers in places other than - backbone. - - Args: - model (:obj:`nn.Module`): The model with parameters to be optimized. - optimizer_cfg (dict): The config dict of the optimizer. - Positional fields are - - - `type`: class name of the optimizer. - - Optional fields are - - - any arguments of the corresponding optimizer type, e.g., - lr, weight_decay, momentum, etc. - paramwise_cfg (dict, optional): Parameter-wise options. - - Example 1: - >>> model = torch.nn.modules.Conv1d(1, 1, 1) - >>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9, - >>> weight_decay=0.0001) - >>> paramwise_cfg = dict(norm_decay_mult=0.) 
- >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - - Example 2: - >>> # assume model have attribute model.backbone and model.cls_head - >>> optimizer_cfg = dict(type='SGD', lr=0.01, weight_decay=0.95) - >>> paramwise_cfg = dict(custom_keys={ - '.backbone': dict(lr_mult=0.1, decay_mult=0.9)}) - >>> optim_builder = DefaultOptimizerConstructor( - >>> optimizer_cfg, paramwise_cfg) - >>> optimizer = optim_builder(model) - >>> # Then the `lr` and `weight_decay` for model.backbone is - >>> # (0.01 * 0.1, 0.95 * 0.9). `lr` and `weight_decay` for - >>> # model.cls_head is (0.01, 0.95). - """ - - def __init__(self, optimizer_cfg, paramwise_cfg=None): - if not isinstance(optimizer_cfg, dict): - raise TypeError('optimizer_cfg should be a dict', - f'but got {type(optimizer_cfg)}') - self.optimizer_cfg = optimizer_cfg - self.paramwise_cfg = {} if paramwise_cfg is None else paramwise_cfg - self.base_lr = optimizer_cfg.get('lr', None) - self.base_wd = optimizer_cfg.get('weight_decay', None) - self._validate_cfg() - - def _validate_cfg(self): - if not isinstance(self.paramwise_cfg, dict): - raise TypeError('paramwise_cfg should be None or a dict, ' - f'but got {type(self.paramwise_cfg)}') - - if 'custom_keys' in self.paramwise_cfg: - if not isinstance(self.paramwise_cfg['custom_keys'], dict): - raise TypeError( - 'If specified, custom_keys must be a dict, ' - f'but got {type(self.paramwise_cfg["custom_keys"])}') - if self.base_wd is None: - for key in self.paramwise_cfg['custom_keys']: - if 'decay_mult' in self.paramwise_cfg['custom_keys'][key]: - raise ValueError('base_wd should not be None') - - # get base lr and weight decay - # weight_decay must be explicitly specified if mult is specified - if ('bias_decay_mult' in self.paramwise_cfg - or 'norm_decay_mult' in self.paramwise_cfg - or 'dwconv_decay_mult' in self.paramwise_cfg): - if self.base_wd is None: - raise ValueError('base_wd should not be None') - - def _is_in(self, param_group, param_group_list): - assert is_list_of(param_group_list, dict) - param = set(param_group['params']) - param_set = set() - for group in param_group_list: - param_set.update(set(group['params'])) - - return not param.isdisjoint(param_set) - - def add_params(self, params, module, prefix='', is_dcn_module=None): - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - prefix (str): The prefix of the module - is_dcn_module (int|float|None): If the current module is a - submodule of DCN, `is_dcn_module` will be passed to - control conv_offset layer's learning rate. Defaults to None. - """ - # get param-wise options - custom_keys = self.paramwise_cfg.get('custom_keys', {}) - # first sort with alphabet order and then sort with reversed len of str - sorted_keys = sorted(sorted(custom_keys.keys()), key=len, reverse=True) - - bias_lr_mult = self.paramwise_cfg.get('bias_lr_mult', 1.) - bias_decay_mult = self.paramwise_cfg.get('bias_decay_mult', 1.) - norm_decay_mult = self.paramwise_cfg.get('norm_decay_mult', 1.) - dwconv_decay_mult = self.paramwise_cfg.get('dwconv_decay_mult', 1.) - bypass_duplicate = self.paramwise_cfg.get('bypass_duplicate', False) - dcn_offset_lr_mult = self.paramwise_cfg.get('dcn_offset_lr_mult', 1.) 
- - # special rules for norm layers and depth-wise conv layers - is_norm = isinstance(module, - (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)) - is_dwconv = ( - isinstance(module, torch.nn.Conv2d) - and module.in_channels == module.groups) - - for name, param in module.named_parameters(recurse=False): - param_group = {'params': [param]} - if not param.requires_grad: - params.append(param_group) - continue - if bypass_duplicate and self._is_in(param_group, params): - warnings.warn(f'{prefix} is duplicate. It is skipped since ' - f'bypass_duplicate={bypass_duplicate}') - continue - # if the parameter match one of the custom keys, ignore other rules - is_custom = False - for key in sorted_keys: - if key in f'{prefix}.{name}': - is_custom = True - lr_mult = custom_keys[key].get('lr_mult', 1.) - param_group['lr'] = self.base_lr * lr_mult - if self.base_wd is not None: - decay_mult = custom_keys[key].get('decay_mult', 1.) - param_group['weight_decay'] = self.base_wd * decay_mult - break - - if not is_custom: - # bias_lr_mult affects all bias parameters - # except for norm.bias dcn.conv_offset.bias - if name == 'bias' and not (is_norm or is_dcn_module): - param_group['lr'] = self.base_lr * bias_lr_mult - - if (prefix.find('conv_offset') != -1 and is_dcn_module - and isinstance(module, torch.nn.Conv2d)): - # deal with both dcn_offset's bias & weight - param_group['lr'] = self.base_lr * dcn_offset_lr_mult - - # apply weight decay policies - if self.base_wd is not None: - # norm decay - if is_norm: - param_group[ - 'weight_decay'] = self.base_wd * norm_decay_mult - # depth-wise conv - elif is_dwconv: - param_group[ - 'weight_decay'] = self.base_wd * dwconv_decay_mult - # bias lr and decay - elif name == 'bias' and not is_dcn_module: - # TODO: current bias_decay_mult will have affect on DCN - param_group[ - 'weight_decay'] = self.base_wd * bias_decay_mult - params.append(param_group) - - if check_ops_exist(): - from annotator.uniformer.mmcv.ops import DeformConv2d, ModulatedDeformConv2d - is_dcn_module = isinstance(module, - (DeformConv2d, ModulatedDeformConv2d)) - else: - is_dcn_module = False - for child_name, child_mod in module.named_children(): - child_prefix = f'{prefix}.{child_name}' if prefix else child_name - self.add_params( - params, - child_mod, - prefix=child_prefix, - is_dcn_module=is_dcn_module) - - def __call__(self, model): - if hasattr(model, 'module'): - model = model.module - - optimizer_cfg = self.optimizer_cfg.copy() - # if no paramwise option is specified, just use the global setting - if not self.paramwise_cfg: - optimizer_cfg['params'] = model.parameters() - return build_from_cfg(optimizer_cfg, OPTIMIZERS) - - # set param-wise lr and weight decay recursively - params = [] - self.add_params(params, model) - optimizer_cfg['params'] = params - - return build_from_cfg(optimizer_cfg, OPTIMIZERS) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/coder/base_bbox_coder.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/coder/base_bbox_coder.py deleted file mode 100644 index cf0b34c7cc2fe561718b0c884990beb40a993643..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/coder/base_bbox_coder.py +++ /dev/null @@ -1,17 +0,0 @@ -from abc import ABCMeta, abstractmethod - - -class BaseBBoxCoder(metaclass=ABCMeta): - """Base bounding box coder.""" - - def __init__(self, **kwargs): - pass - - @abstractmethod - def encode(self, bboxes, gt_bboxes): - """Encode 
deltas between bboxes and ground truth boxes."""
-
-    @abstractmethod
-    def decode(self, bboxes, bboxes_pred):
-        """Decode the predicted bboxes according to prediction and base
-        boxes."""
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/random_sampler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/random_sampler.py
deleted file mode 100644
index f34b006e8bb0b55c74aa1c3b792f3664ada93162..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/bbox/samplers/random_sampler.py
+++ /dev/null
@@ -1,78 +0,0 @@
-import torch
-
-from ..builder import BBOX_SAMPLERS
-from .base_sampler import BaseSampler
-
-
-@BBOX_SAMPLERS.register_module()
-class RandomSampler(BaseSampler):
-    """Random sampler.
-
-    Args:
-        num (int): Number of samples
-        pos_fraction (float): Fraction of positive samples
-        neg_pos_ub (int, optional): Upper bound number of negative and
-            positive samples. Defaults to -1.
-        add_gt_as_proposals (bool, optional): Whether to add ground truth
-            boxes as proposals. Defaults to True.
-    """
-
-    def __init__(self,
-                 num,
-                 pos_fraction,
-                 neg_pos_ub=-1,
-                 add_gt_as_proposals=True,
-                 **kwargs):
-        from mmdet.core.bbox import demodata
-        super(RandomSampler, self).__init__(num, pos_fraction, neg_pos_ub,
-                                            add_gt_as_proposals)
-        self.rng = demodata.ensure_rng(kwargs.get('rng', None))
-
-    def random_choice(self, gallery, num):
-        """Randomly select some elements from the gallery.
-
-        If `gallery` is a Tensor, the returned indices will be a Tensor;
-        If `gallery` is a ndarray or list, the returned indices will be a
-        ndarray.
-
-        Args:
-            gallery (Tensor | ndarray | list): indices pool.
-            num (int): expected sample num.
-
-        Returns:
-            Tensor or ndarray: sampled indices.
- """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - perm = torch.randperm(gallery.numel(), device=gallery.device)[:num] - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Randomly sample some positive samples.""" - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Randomly sample some negative samples.""" - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - return self.random_choice(neg_inds, num_expected) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/cityscapes.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/cityscapes.py deleted file mode 100644 index 71eead87e7f4e511c0cb59e69c3a599832ada0e4..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/datasets/cityscapes.py +++ /dev/null @@ -1,334 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa -# and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - -import glob -import os -import os.path as osp -import tempfile -from collections import OrderedDict - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -from mmcv.utils import print_log - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class CityscapesDataset(CocoDataset): - - CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = img_info['id'] - ann_ids = self.coco.getAnnIds(imgIds=[img_id]) - ann_info = self.coco.loadAnns(ann_ids) - all_iscrowd = all([_['iscrowd'] for _ in ann_info]) - if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat - or all_iscrowd): - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - img_info (dict): Image info of an image. 
- ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, \ - bboxes_ignore, labels, masks, seg_map. \ - "masks" are already decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=img_info['segm_file']) - - return ann - - def results2txt(self, results, outfile_prefix): - """Dump the detection results to a txt file. - - Args: - results (list[list | tuple]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. - If the prefix is "somepath/xxx", - the txt files will be named "somepath/xxx.txt". - - Returns: - list[str]: Result txt files which contains corresponding \ - instance segmentation images. - """ - try: - import cityscapesscripts.helpers.labels as CSLabels - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - result_files = [] - os.makedirs(outfile_prefix, exist_ok=True) - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - filename = self.data_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - pred_txt = osp.join(outfile_prefix, basename + '_pred.txt') - - bbox_result, segm_result = result - bboxes = np.vstack(bbox_result) - # segm results - if isinstance(segm_result, tuple): - # Some detectors use different scores for bbox and mask, - # like Mask Scoring R-CNN. Score of segm will be used instead - # of bbox score. 
- segms = mmcv.concat_list(segm_result[0]) - mask_score = segm_result[1] - else: - # use bbox score for mask score - segms = mmcv.concat_list(segm_result) - mask_score = [bbox[-1] for bbox in bboxes] - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - assert len(bboxes) == len(segms) == len(labels) - num_instances = len(bboxes) - prog_bar.update() - with open(pred_txt, 'w') as fout: - for i in range(num_instances): - pred_class = labels[i] - classes = self.CLASSES[pred_class] - class_id = CSLabels.name2label[classes].id - score = mask_score[i] - mask = maskUtils.decode(segms[i]).astype(np.uint8) - png_filename = osp.join(outfile_prefix, - basename + f'_{i}_{classes}.png') - mmcv.imwrite(mask, png_filename) - fout.write(f'{osp.basename(png_filename)} {class_id} ' - f'{score}\n') - result_files.append(pred_txt) - - return result_files - - def format_results(self, results, txtfile_prefix=None): - """Format the results to txt (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of txt files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving txt/png files when txtfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2txt(results, txtfile_prefix) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - outfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in Cityscapes/COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - outfile_prefix (str | None): The prefix of output file. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with COCO protocol, it would be the - prefix of output json file. For example, the metric is 'bbox' - and 'segm', then json files would be "a/b/prefix.bbox.json" and - "a/b/prefix.segm.json". - If results are evaluated with cityscapes protocol, it would be - the prefix of output txt/png files. The output files would be - png images under folder "a/b/prefix/xxx/" and the file name of - images would be written into a txt file - "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of - cityscapes. If not specified, a temp file will be created. - Default: None. - classwise (bool): Whether to evaluating the AP for each class. 
- proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: COCO style evaluation metric or cityscapes mAP \ - and AP@50. - """ - eval_results = dict() - - metrics = metric.copy() if isinstance(metric, list) else [metric] - - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, outfile_prefix, logger)) - metrics.remove('cityscapes') - - # left metrics are all coco metric - if len(metrics) > 0: - # create CocoDataset with CityscapesDataset annotation - self_coco = CocoDataset(self.ann_file, self.pipeline.transforms, - None, self.data_root, self.img_prefix, - self.seg_prefix, self.proposal_file, - self.test_mode, self.filter_empty_gt) - # TODO: remove this in the future - # reload annotations of correct class - self_coco.CLASSES = self.CLASSES - self_coco.data_infos = self_coco.load_annotations(self.ann_file) - eval_results.update( - self_coco.evaluate(results, metrics, logger, outfile_prefix, - classwise, proposal_nums, iou_thrs)) - - return eval_results - - def _evaluate_cityscapes(self, results, txtfile_prefix, logger): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of output txt file - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: Cityscapes evaluation results, contains 'mAP' \ - and 'AP@50'. - """ - - try: - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, txtfile_prefix) - - if tmp_dir is None: - result_dir = osp.join(txtfile_prefix, 'results') - else: - result_dir = osp.join(tmp_dir.name, 'results') - - eval_results = OrderedDict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - # set global states in cityscapes evaluation API - CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..') - CSEval.args.predictionPath = os.path.abspath(result_dir) - CSEval.args.predictionWalk = None - CSEval.args.JSONOutput = False - CSEval.args.colorized = False - CSEval.args.gtInstancesFile = os.path.join(result_dir, - 'gtInstances.json') - CSEval.args.groundTruthSearch = os.path.join( - self.img_prefix.replace('leftImg8bit', 'gtFine'), - '*/*_gtFine_instanceIds.png') - - groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch) - assert len(groundTruthImgList), 'Cannot find ground truth images' \ - f' in {CSEval.args.groundTruthSearch}.' 
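# For each ground-truth instance file, CSEval.getPrediction is expected to
# locate the matching "<basename>_pred.txt" written by results2txt above,
# so predictions and ground truth are paired one-to-one before evaluation.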
- predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(CSEval.getPrediction(gt, CSEval.args)) - CSEval_results = CSEval.evaluateImgLists(predictionImgList, - groundTruthImgList, - CSEval.args)['averages'] - - eval_results['mAP'] = CSEval_results['allAp'] - eval_results['AP@50'] = CSEval_results['allAp50%'] - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/vgg.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/vgg.py deleted file mode 100644 index 8778b649561a45a9652b1a15a26c2d171e58f3e1..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/vgg.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - -from .utils import constant_init, kaiming_init, normal_init - - -def conv3x3(in_planes, out_planes, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - padding=dilation, - dilation=dilation) - - -def make_vgg_layer(inplanes, - planes, - num_blocks, - dilation=1, - with_bn=False, - ceil_mode=False): - layers = [] - for _ in range(num_blocks): - layers.append(conv3x3(inplanes, planes, dilation)) - if with_bn: - layers.append(nn.BatchNorm2d(planes)) - layers.append(nn.ReLU(inplace=True)) - inplanes = planes - layers.append(nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=ceil_mode)) - - return layers - - -class VGG(nn.Module): - """VGG backbone. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_bn (bool): Use BatchNorm or not. - num_classes (int): number of classes for classification. - num_stages (int): VGG stages, normally 5. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. 
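    Example (illustrative sketch, assuming the class is exported as
    ``mmcv.cnn.VGG``):
        >>> import torch
        >>> from mmcv.cnn import VGG
        >>> model = VGG(depth=11, num_stages=5, out_indices=(4, ))
        >>> x = torch.rand(1, 3, 224, 224)
        >>> feat = model(x)
        >>> # only one stage is requested, so `feat` is a single tensor of
        >>> # shape (1, 512, 7, 7) rather than a tuple of feature maps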
- """ - - arch_settings = { - 11: (1, 1, 2, 2, 2), - 13: (2, 2, 2, 2, 2), - 16: (2, 2, 3, 3, 3), - 19: (2, 2, 4, 4, 4) - } - - def __init__(self, - depth, - with_bn=False, - num_classes=-1, - num_stages=5, - dilations=(1, 1, 1, 1, 1), - out_indices=(0, 1, 2, 3, 4), - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - ceil_mode=False, - with_last_pool=True): - super(VGG, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for vgg') - assert num_stages >= 1 and num_stages <= 5 - stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - assert len(dilations) == num_stages - assert max(out_indices) <= num_stages - - self.num_classes = num_classes - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - - self.inplanes = 3 - start_idx = 0 - vgg_layers = [] - self.range_sub_modules = [] - for i, num_blocks in enumerate(self.stage_blocks): - num_modules = num_blocks * (2 + with_bn) + 1 - end_idx = start_idx + num_modules - dilation = dilations[i] - planes = 64 * 2**i if i < 4 else 512 - vgg_layer = make_vgg_layer( - self.inplanes, - planes, - num_blocks, - dilation=dilation, - with_bn=with_bn, - ceil_mode=ceil_mode) - vgg_layers.extend(vgg_layer) - self.inplanes = planes - self.range_sub_modules.append([start_idx, end_idx]) - start_idx = end_idx - if not with_last_pool: - vgg_layers.pop(-1) - self.range_sub_modules[-1][1] -= 1 - self.module_name = 'features' - self.add_module(self.module_name, nn.Sequential(*vgg_layers)) - - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - vgg_layers = getattr(self, self.module_name) - for i in range(len(self.stage_blocks)): - for j in range(*self.range_sub_modules[i]): - vgg_layer = vgg_layers[j] - x = vgg_layer(x) - if i in self.out_indices: - outs.append(x) - if self.num_classes > 0: - x = x.view(x.size(0), -1) - x = self.classifier(x) - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(VGG, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - vgg_layers = getattr(self, self.module_name) - if mode and self.frozen_stages >= 0: - for i in range(self.frozen_stages): - for j in range(*self.range_sub_modules[i]): - mod = vgg_layers[j] - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/merge_cells.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/merge_cells.py deleted file mode 100644 index 48ca8cc0a8aca8432835bd760c0403a3c35b34cf..0000000000000000000000000000000000000000 --- 
a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/merge_cells.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import abstractmethod - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..cnn import ConvModule - - -class BaseMergeCell(nn.Module): - """The basic class for cells used in NAS-FPN and NAS-FCOS. - - BaseMergeCell takes 2 inputs. After applying convolution - on them, they are resized to the target size. Then, - they go through binary_op, which depends on the type of cell. - If with_out_conv is True, the result of output will go through - another convolution layer. - - Args: - in_channels (int): number of input channels in out_conv layer. - out_channels (int): number of output channels in out_conv layer. - with_out_conv (bool): Whether to use out_conv layer - out_conv_cfg (dict): Config dict for convolution layer, which should - contain "groups", "kernel_size", "padding", "bias" to build - out_conv layer. - out_norm_cfg (dict): Config dict for normalization layer in out_conv. - out_conv_order (tuple): The order of conv/norm/activation layers in - out_conv. - with_input1_conv (bool): Whether to use convolution on input1. - with_input2_conv (bool): Whether to use convolution on input2. - input_conv_cfg (dict): Config dict for building input1_conv layer and - input2_conv layer, which is expected to contain the type of - convolution. - Default: None, which means using conv2d. - input_norm_cfg (dict): Config dict for normalization layer in - input1_conv and input2_conv layer. Default: None. - upsample_mode (str): Interpolation method used to resize the output - of input1_conv and input2_conv to target size. Currently, we - support ['nearest', 'bilinear']. Default: 'nearest'. 
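    Example (illustrative sketch; assumes the cells are importable from
    ``mmcv.ops.merge_cells``):
        >>> import torch
        >>> from mmcv.ops.merge_cells import SumCell
        >>> cell = SumCell(in_channels=256, out_channels=256)
        >>> x1 = torch.rand(1, 256, 16, 16)
        >>> x2 = torch.rand(1, 256, 32, 32)
        >>> out = cell(x1, x2)
        >>> # x1 is resized up to 32x32, the two maps are summed, and the
        >>> # result passes through out_conv, so `out` has shape (1, 256, 32, 32)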
- """ - - def __init__(self, - fused_channels=256, - out_channels=256, - with_out_conv=True, - out_conv_cfg=dict( - groups=1, kernel_size=3, padding=1, bias=True), - out_norm_cfg=None, - out_conv_order=('act', 'conv', 'norm'), - with_input1_conv=False, - with_input2_conv=False, - input_conv_cfg=None, - input_norm_cfg=None, - upsample_mode='nearest'): - super(BaseMergeCell, self).__init__() - assert upsample_mode in ['nearest', 'bilinear'] - self.with_out_conv = with_out_conv - self.with_input1_conv = with_input1_conv - self.with_input2_conv = with_input2_conv - self.upsample_mode = upsample_mode - - if self.with_out_conv: - self.out_conv = ConvModule( - fused_channels, - out_channels, - **out_conv_cfg, - norm_cfg=out_norm_cfg, - order=out_conv_order) - - self.input1_conv = self._build_input_conv( - out_channels, input_conv_cfg, - input_norm_cfg) if with_input1_conv else nn.Sequential() - self.input2_conv = self._build_input_conv( - out_channels, input_conv_cfg, - input_norm_cfg) if with_input2_conv else nn.Sequential() - - def _build_input_conv(self, channel, conv_cfg, norm_cfg): - return ConvModule( - channel, - channel, - 3, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - bias=True) - - @abstractmethod - def _binary_op(self, x1, x2): - pass - - def _resize(self, x, size): - if x.shape[-2:] == size: - return x - elif x.shape[-2:] < size: - return F.interpolate(x, size=size, mode=self.upsample_mode) - else: - assert x.shape[-2] % size[-2] == 0 and x.shape[-1] % size[-1] == 0 - kernel_size = x.shape[-1] // size[-1] - x = F.max_pool2d(x, kernel_size=kernel_size, stride=kernel_size) - return x - - def forward(self, x1, x2, out_size=None): - assert x1.shape[:2] == x2.shape[:2] - assert out_size is None or len(out_size) == 2 - if out_size is None: # resize to larger one - out_size = max(x1.size()[2:], x2.size()[2:]) - - x1 = self.input1_conv(x1) - x2 = self.input2_conv(x2) - - x1 = self._resize(x1, out_size) - x2 = self._resize(x2, out_size) - - x = self._binary_op(x1, x2) - if self.with_out_conv: - x = self.out_conv(x) - return x - - -class SumCell(BaseMergeCell): - - def __init__(self, in_channels, out_channels, **kwargs): - super(SumCell, self).__init__(in_channels, out_channels, **kwargs) - - def _binary_op(self, x1, x2): - return x1 + x2 - - -class ConcatCell(BaseMergeCell): - - def __init__(self, in_channels, out_channels, **kwargs): - super(ConcatCell, self).__init__(in_channels * 2, out_channels, - **kwargs) - - def _binary_op(self, x1, x2): - ret = torch.cat([x1, x2], dim=1) - return ret - - -class GlobalPoolingCell(BaseMergeCell): - - def __init__(self, in_channels=None, out_channels=None, **kwargs): - super().__init__(in_channels, out_channels, **kwargs) - self.global_pool = nn.AdaptiveAvgPool2d((1, 1)) - - def _binary_op(self, x1, x2): - x2_att = self.global_pool(x2).sigmoid() - return x2 + x2_att * x1 diff --git a/spaces/Ryandhikaw/rvc-hololive/infer_pack/modules.py b/spaces/Ryandhikaw/rvc-hololive/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/Ryandhikaw/rvc-hololive/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from 
infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = 
torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class 
ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - 
self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/README.md b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/README.md deleted file mode 100644 index 9eaa2b3d82adf58854fcfc0e867412a1be7aabdb..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Augmented Retrieval Qa ChatGPT -emoji: 🚀 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.19.0 -app_file: streamlit_langchain_chat/streamlit_app.py -pinned: false -python_version: 3.10.4 -license: cc-by-nc-sa-4.0 -duplicated_from: hlydecker/Augmented-Retrieval-qa-ChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SIH/building-segmentation/app.py b/spaces/SIH/building-segmentation/app.py deleted file mode 100644 index 2d582f7a210b62d55468ef48aa28425caa430311..0000000000000000000000000000000000000000 --- a/spaces/SIH/building-segmentation/app.py +++ /dev/null @@ -1,69 +0,0 @@ -""" -building-segmentation -Proof of concept showing effectiveness of a fine tuned instance segmentation model for deteting buildings. 
-""" -import os -import cv2 -os.system("pip install 'git+https://github.com/facebookresearch/detectron2.git'") -from transformers import DetrFeatureExtractor, DetrForSegmentation -from PIL import Image -import gradio as gr -import numpy as np -import torch -import torchvision -import detectron2 - -# import some common detectron2 utilities -import itertools -import seaborn as sns -from detectron2 import model_zoo -from detectron2.engine import DefaultPredictor -from detectron2.config import get_cfg -from detectron2.utils.visualizer import Visualizer -from detectron2.utils.visualizer import ColorMode -from detectron2.data import MetadataCatalog, DatasetCatalog -from detectron2.checkpoint import DetectionCheckpointer - -cfg = get_cfg() -cfg.merge_from_file("model_weights/buildings_poc_cfg.yml") -cfg.MODEL.DEVICE='cpu' -cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.35 -cfg.MODEL.WEIGHTS = "model_weights/model_final.pth" -cfg.MODEL.ROI_HEADS.NUM_CLASSES = 8 -predictor = DefaultPredictor(cfg) - -def segment_buildings(im, confidence_threshold): - im = np.array(im) - outputs = predictor(im) - - instances = outputs["instances"].to("cpu") - scores = instances.scores - selected_indices = scores > confidence_threshold - selected_instances = instances[selected_indices] - - v = Visualizer(im[:, :, ::-1], - scale=0.5, - instance_mode=ColorMode.SEGMENTATION - ) - out = v.draw_instance_predictions(selected_instances) - - return Image.fromarray(out.get_image()[:, :, ::-1]) - -# gradio components - -gr_slider_confidence = gr.inputs.Slider(0,1,.1,.7, - label='Set confidence threshold % for masks') - -# gradio outputs -inputs = gr.inputs.Image(type="pil", label="Input Image") -outputs = gr.outputs.Image(type="pil", label="Output Image") - -title = "Building Segmentation" -description = "An instance segmentation demo for identifying boundaries of buildings in aerial images using DETR (End-to-End Object Detection) model with MaskRCNN-101 backbone" - -# Create user interface and launch -gr.Interface(segment_buildings, - inputs = [inputs, gr_slider_confidence], - outputs = outputs, - title = title, - description = description).launch(debug=True) \ No newline at end of file diff --git a/spaces/SalahZa/Tunisian-Speech-Recognition/train_with_wavlm.py b/spaces/SalahZa/Tunisian-Speech-Recognition/train_with_wavlm.py deleted file mode 100644 index 5d6ca4c5a378583fd297e1202522b9dc9c2368de..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Tunisian-Speech-Recognition/train_with_wavlm.py +++ /dev/null @@ -1,399 +0,0 @@ -#!/usr/bin/env python3 -import sys -import torch -import logging -import speechbrain as sb -from pathlib import Path -import os -import torchaudio -from hyperpyyaml import load_hyperpyyaml -from speechbrain.tokenizers.SentencePiece import SentencePiece -from speechbrain.utils.data_utils import undo_padding -from speechbrain.utils.distributed import run_on_main - -"""Recipe for training a sequence-to-sequence ASR system with CommonVoice. -The system employs a wav2vec2 encoder and a CTC decoder. -Decoding is performed with greedy decoding (will be extended to beam search). - -To run this recipe, do the following: -> python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml - -With the default hyperparameters, the system employs a pretrained wav2vec2 encoder. -The wav2vec2 model is pretrained following the model given in the hprams file. -It may be dependent on the language. - -The neural network is trained with CTC on sub-word units estimated with -Byte Pairwise Encoding (BPE). 
- -The experiment file is flexible enough to support a large variety of -different systems. By properly changing the parameter files, you can try -different encoders, decoders, tokens (e.g, characters instead of BPE), -training languages (all CommonVoice languages), and many -other possible variations. - -Authors - * Titouan Parcollet 2021 -""" - -logger = logging.getLogger(__name__) - - -# Define training procedure -class ASR(sb.core.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - # Forward pass - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return p_ctc, wav_lens - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens = predictions - - ids = batch.id - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - if stage != sb.Stage.TRAIN: - predicted_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - # Decode token terms to words - if self.hparams.use_language_modelling: - predicted_words = [] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - else: - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - # Convert indices to words - target_words = [wrd.split(" ") for wrd in batch.wrd] - - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - if not self.hparams.wav2vec2.freeze: - self.scaler.unscale_(self.wav2vec_optimizer) - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.scaler.step(self.wav2vec_optimizer) - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= 
self.hparams.warmup_steps: - self.wav2vec_optimizer.step() - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - if not self.hparams.wav2vec2.freeze: - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - - # If the wav2vec encoder is unfrozen, we create the optimizer - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer.zero_grad(set_to_none) - self.model_optimizer.zero_grad(set_to_none) - - -# Define custom data procedure -def dataio_prepare(hparams): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - - # 1. Define datasets - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. 
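# Sorting by duration groups utterances of similar length into the same
# batches, which reduces padding waste; key_max_value below also drops
# clips longer than hparams["avoid_if_longer_than"].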
- train_data = train_data.filtered_sorted( - sort_key="duration", - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", - reverse=True, - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - # We also sort the validation data so it is faster to validate - valid_data = valid_data.filtered_sorted(sort_key="duration") - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav): - info = torchaudio.info(wav) - sig = sb.dataio.dataio.read_audio(wav) - resampled = torchaudio.transforms.Resample( - info.sample_rate, hparams["sample_rate"], - )(sig) - return resampled - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "blank_label": hparams["blank_index"], - "unk_label": hparams["unk_index"] - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. 
Set output: - sb.dataio.dataset.set_output_keys( - datasets, ["id", "sig", "wrd", "char_list", "tokens"], - ) - return train_data, valid_data,test_datasets, label_encoder - - -if __name__ == "__main__": - - # Load hyperparameters file with command-line overrides - hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - - # If --distributed_launch then - # create ddp_group with the right communication protocol - sb.utils.distributed.ddp_init_group(run_opts) - - - # Create experiment directory - sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, - ) - - # Due to DDP, we do the preparation ONLY on the main python process - # Defining tokenizer and loading it - # Create the datasets objects as well as tokenization and encoding :-D - train_data, valid_data, test_datasets, label_encoder = dataio_prepare(hparams) - if hparams["use_language_modelling"]: - print("using langauge_modeeling") - from pyctcdecode import build_ctcdecoder - ind2lab = label_encoder.ind2lab - print(ind2lab) - labels = [ind2lab[x] for x in range(len(ind2lab))] - labels = [""] + labels[1:-1] + ["1"] - # Replace the token with a blank character, needed for PyCTCdecode - print(labels) - decoder = build_ctcdecoder( - labels, - kenlm_model_path=hparams["ngram_lm_path"], # .arpa or .bin - alpha=0.5, # Default by KenLM - beta=1.0, # Default by KenLM - ) - # Trainer initialization - asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], - ) - - # Adding objects to trainer. - asr_brain.tokenizer = label_encoder - - # Training - asr_brain.fit( - asr_brain.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["dataloader_options"], - valid_loader_kwargs=hparams["test_dataloader_options"], - ) - - # Test - for k in test_datasets.keys(): # keys are test_clean, test_other etc - asr_brain.hparams.wer_file = os.path.join( - hparams["output_folder"], "wer_{}.txt".format(k) - ) - asr_brain.evaluate( - test_datasets[k], test_loader_kwargs=hparams["test_dataloader_options"] - ) - diff --git a/spaces/Sapphire-356/Video2MC/common/generators.py b/spaces/Sapphire-356/Video2MC/common/generators.py deleted file mode 100644 index f41dfb77fecc4f09bb5a4778ab9b6c6657c48de7..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/common/generators.py +++ /dev/null @@ -1,425 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -from itertools import zip_longest - -import numpy as np - - -class ChunkedGenerator: - """ - Batched data generator, used for training. - The sequences are split into equal-length chunks and padded as necessary. 
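    For example (illustrative numbers, not taken from a config in this file):
    with chunk_length=1 and pad=121 the generator predicts 1 output frame from
    1 + 2*121 = 243 input frames per example, and frames missing near sequence
    boundaries are filled in by edge padding.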
- - Arguments: - batch_size -- the batch size to use for training - cameras -- list of cameras, one element for each video (optional, used for semi-supervised training) - poses_3d -- list of ground-truth 3D poses, one element for each video (optional, used for supervised training) - poses_2d -- list of input 2D keypoints, one element for each video - chunk_length -- number of output frames to predict for each training example (usually 1) - pad -- 2D input padding to compensate for valid convolutions, per side (depends on the receptive field) - causal_shift -- asymmetric padding offset when causal convolutions are used (usually 0 or "pad") - shuffle -- randomly shuffle the dataset before each epoch - random_seed -- initial seed to use for the random generator - augment -- augment the dataset by flipping poses horizontally - kps_left and kps_right -- list of left/right 2D keypoints if flipping is enabled - joints_left and joints_right -- list of left/right 3D joints if flipping is enabled - """ - - def __init__(self, batch_size, cameras, poses_3d, poses_2d, - chunk_length, pad=0, causal_shift=0, - shuffle=True, random_seed=1234, - augment=False, kps_left=None, kps_right=None, joints_left=None, joints_right=None, - endless=False): - assert poses_3d is None or len(poses_3d) == len(poses_2d), (len(poses_3d), len(poses_2d)) - assert cameras is None or len(cameras) == len(poses_2d) - - # Build lineage info - pairs = [] # (seq_idx, start_frame, end_frame, flip) tuples - for i in range(len(poses_2d)): - assert poses_3d is None or poses_3d[i].shape[0] == poses_3d[i].shape[0] - n_chunks = (poses_2d[i].shape[0] + chunk_length - 1) // chunk_length - offset = (n_chunks * chunk_length - poses_2d[i].shape[0]) // 2 - bounds = np.arange(n_chunks + 1) * chunk_length - offset - augment_vector = np.full(len(bounds - 1), False, dtype=bool) - pairs += zip(np.repeat(i, len(bounds - 1)), bounds[:-1], bounds[1:], augment_vector) - if augment: - pairs += zip(np.repeat(i, len(bounds - 1)), bounds[:-1], bounds[1:], ~augment_vector) - - # Initialize buffers - if cameras is not None: - self.batch_cam = np.empty((batch_size, cameras[0].shape[-1])) - if poses_3d is not None: - self.batch_3d = np.empty((batch_size, chunk_length, poses_3d[0].shape[-2], poses_3d[0].shape[-1])) - self.batch_2d = np.empty((batch_size, chunk_length + 2 * pad, poses_2d[0].shape[-2], poses_2d[0].shape[-1])) - - self.num_batches = (len(pairs) + batch_size - 1) // batch_size - self.batch_size = batch_size - self.random = np.random.RandomState(random_seed) - self.pairs = pairs - self.shuffle = shuffle - self.pad = pad - self.causal_shift = causal_shift - self.endless = endless - self.state = None - - self.cameras = cameras - self.poses_3d = poses_3d - self.poses_2d = poses_2d - - self.augment = augment - self.kps_left = kps_left - self.kps_right = kps_right - self.joints_left = joints_left - self.joints_right = joints_right - - def num_frames(self): - return self.num_batches * self.batch_size - - def random_state(self): - return self.random - - def set_random_state(self, random): - self.random = random - - def augment_enabled(self): - return self.augment - - def next_pairs(self): - if self.state is None: - if self.shuffle: - pairs = self.random.permutation(self.pairs) - else: - pairs = self.pairs - return 0, pairs - else: - return self.state - - def next_epoch(self): - enabled = True - while enabled: - start_idx, pairs = self.next_pairs() - for b_i in range(start_idx, self.num_batches): - chunks = pairs[b_i * self.batch_size: (b_i + 1) * 
self.batch_size] - for i, (seq_i, start_3d, end_3d, flip) in enumerate(chunks): - start_2d = start_3d - self.pad - self.causal_shift - end_2d = end_3d + self.pad - self.causal_shift - - # 2D poses - seq_2d = self.poses_2d[seq_i] - low_2d = max(start_2d, 0) - high_2d = min(end_2d, seq_2d.shape[0]) - pad_left_2d = low_2d - start_2d - pad_right_2d = end_2d - high_2d - if pad_left_2d != 0 or pad_right_2d != 0: - self.batch_2d[i] = np.pad(seq_2d[low_2d:high_2d], ((pad_left_2d, pad_right_2d), (0, 0), (0, 0)), 'edge') - else: - self.batch_2d[i] = seq_2d[low_2d:high_2d] - - if flip: - # Flip 2D keypoints - self.batch_2d[i, :, :, 0] *= -1 - self.batch_2d[i, :, self.kps_left + self.kps_right] = self.batch_2d[i, :, self.kps_right + self.kps_left] - - # 3D poses - if self.poses_3d is not None: - seq_3d = self.poses_3d[seq_i] - low_3d = max(start_3d, 0) - high_3d = min(end_3d, seq_3d.shape[0]) - pad_left_3d = low_3d - start_3d - pad_right_3d = end_3d - high_3d - if pad_left_3d != 0 or pad_right_3d != 0: - self.batch_3d[i] = np.pad(seq_3d[low_3d:high_3d], ((pad_left_3d, pad_right_3d), (0, 0), (0, 0)), 'edge') - else: - self.batch_3d[i] = seq_3d[low_3d:high_3d] - - if flip: - # Flip 3D joints - self.batch_3d[i, :, :, 0] *= -1 - self.batch_3d[i, :, self.joints_left + self.joints_right] = \ - self.batch_3d[i, :, self.joints_right + self.joints_left] - - # Cameras - if self.cameras is not None: - self.batch_cam[i] = self.cameras[seq_i] - if flip: - # Flip horizontal distortion coefficients - self.batch_cam[i, 2] *= -1 - self.batch_cam[i, 7] *= -1 - - if self.endless: - self.state = (b_i + 1, pairs) - if self.poses_3d is None and self.cameras is None: - yield None, None, self.batch_2d[:len(chunks)] - elif self.poses_3d is not None and self.cameras is None: - yield None, self.batch_3d[:len(chunks)], self.batch_2d[:len(chunks)] - elif self.poses_3d is None: - yield self.batch_cam[:len(chunks)], None, self.batch_2d[:len(chunks)] - else: - yield self.batch_cam[:len(chunks)], self.batch_3d[:len(chunks)], self.batch_2d[:len(chunks)] - - if self.endless: - self.state = None - else: - enabled = False - - -class UnchunkedGenerator: - """ - Non-batched data generator, used for testing. - Sequences are returned one at a time (i.e. batch size = 1), without chunking. - - If data augmentation is enabled, the batches contain two sequences (i.e. batch size = 2), - the second of which is a mirrored version of the first. 
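    The flip swaps the left/right keypoint and joint indices so the mirrored
    sequence stays anatomically consistent; the caller is then expected to
    un-flip and average the two predictions (test-time augmentation).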
- - Arguments: - cameras -- list of cameras, one element for each video (optional, used for semi-supervised training) - poses_3d -- list of ground-truth 3D poses, one element for each video (optional, used for supervised training) - poses_2d -- list of input 2D keypoints, one element for each video - pad -- 2D input padding to compensate for valid convolutions, per side (depends on the receptive field) - causal_shift -- asymmetric padding offset when causal convolutions are used (usually 0 or "pad") - augment -- augment the dataset by flipping poses horizontally - kps_left and kps_right -- list of left/right 2D keypoints if flipping is enabled - joints_left and joints_right -- list of left/right 3D joints if flipping is enabled - """ - - def __init__(self, cameras, poses_3d, poses_2d, pad=0, causal_shift=0, - augment=False, kps_left=None, kps_right=None, joints_left=None, joints_right=None): - assert poses_3d is None or len(poses_3d) == len(poses_2d) - assert cameras is None or len(cameras) == len(poses_2d) - - self.augment = augment - self.kps_left = kps_left - self.kps_right = kps_right - self.joints_left = joints_left - self.joints_right = joints_right - - self.pad = pad - self.causal_shift = causal_shift - self.cameras = [] if cameras is None else cameras - self.poses_3d = [] if poses_3d is None else poses_3d - self.poses_2d = poses_2d - - def num_frames(self): - count = 0 - for p in self.poses_2d: - count += p.shape[0] - return count - - def augment_enabled(self): - return self.augment - - def set_augment(self, augment): - self.augment = augment - - def next_epoch(self): - for seq_cam, seq_3d, seq_2d in zip_longest(self.cameras, self.poses_3d, self.poses_2d): - batch_cam = None if seq_cam is None else np.expand_dims(seq_cam, axis=0) - batch_3d = None if seq_3d is None else np.expand_dims(seq_3d, axis=0) - # 2D input padding to compensate for valid convolutions, per side (depends on the receptive field) - batch_2d = np.expand_dims(np.pad(seq_2d, - ((self.pad + self.causal_shift, self.pad - self.causal_shift), (0, 0), (0, 0)), - 'edge'), axis=0) - if self.augment: - # Append flipped version - if batch_cam is not None: - batch_cam = np.concatenate((batch_cam, batch_cam), axis=0) - batch_cam[1, 2] *= -1 - batch_cam[1, 7] *= -1 - - if batch_3d is not None: - batch_3d = np.concatenate((batch_3d, batch_3d), axis=0) - batch_3d[1, :, :, 0] *= -1 - batch_3d[1, :, self.joints_left + self.joints_right] = batch_3d[1, :, self.joints_right + self.joints_left] - - batch_2d = np.concatenate((batch_2d, batch_2d), axis=0) - batch_2d[1, :, :, 0] *= -1 - batch_2d[1, :, self.kps_left + self.kps_right] = batch_2d[1, :, self.kps_right + self.kps_left] - - yield batch_cam, batch_3d, batch_2d - -class Evaluate_Generator: - """ - Batched data generator, used for training. - The sequences are split into equal-length chunks and padded as necessary. 
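    Unlike ChunkedGenerator, flipped poses are not mixed into the chunk list
    here: when augment is True a separate flipped copy of each 2D batch
    (batch_2d_flip) is yielded alongside the original, so both views can be
    evaluated together.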
- Arguments: - batch_size -- the batch size to use for training - cameras -- list of cameras, one element for each video (optional, used for semi-supervised training) - poses_3d -- list of ground-truth 3D poses, one element for each video (optional, used for supervised training) - poses_2d -- list of input 2D keypoints, one element for each video - chunk_length -- number of output frames to predict for each training example (usually 1) - pad -- 2D input padding to compensate for valid convolutions, per side (depends on the receptive field) - causal_shift -- asymmetric padding offset when causal convolutions are used (usually 0 or "pad") - shuffle -- randomly shuffle the dataset before each epoch - random_seed -- initial seed to use for the random generator - augment -- augment the dataset by flipping poses horizontally - kps_left and kps_right -- list of left/right 2D keypoints if flipping is enabled - joints_left and joints_right -- list of left/right 3D joints if flipping is enabled - """ - - def __init__(self, batch_size, cameras, poses_3d, poses_2d, - chunk_length, pad=0, causal_shift=0, - shuffle=True, random_seed=1234, - augment=False, kps_left=None, kps_right=None, joints_left=None, joints_right=None, - endless=False): - assert poses_3d is None or len(poses_3d) == len(poses_2d), (len(poses_3d), len(poses_2d)) - assert cameras is None or len(cameras) == len(poses_2d) - - # Build lineage info - pairs = [] # (seq_idx, start_frame, end_frame, flip) tuples - for i in range(len(poses_2d)): - assert poses_3d is None or poses_3d[i].shape[0] == poses_3d[i].shape[0] - n_chunks = (poses_2d[i].shape[0] + chunk_length - 1) // chunk_length - offset = (n_chunks * chunk_length - poses_2d[i].shape[0]) // 2 - bounds = np.arange(n_chunks + 1) * chunk_length - offset - augment_vector = np.full(len(bounds - 1), False, dtype=bool) - pairs += zip(np.repeat(i, len(bounds - 1)), bounds[:-1], bounds[1:], augment_vector) - - # Initialize buffers - if cameras is not None: - self.batch_cam = np.empty((batch_size, cameras[0].shape[-1])) - if poses_3d is not None: - self.batch_3d = np.empty((batch_size, chunk_length, poses_3d[0].shape[-2], poses_3d[0].shape[-1])) - - if augment: - self.batch_2d_flip = np.empty( - (batch_size, chunk_length + 2 * pad, poses_2d[0].shape[-2], poses_2d[0].shape[-1])) - self.batch_2d = np.empty((batch_size, chunk_length + 2 * pad, poses_2d[0].shape[-2], poses_2d[0].shape[-1])) - else: - self.batch_2d = np.empty((batch_size, chunk_length + 2 * pad, poses_2d[0].shape[-2], poses_2d[0].shape[-1])) - - self.num_batches = (len(pairs) + batch_size - 1) // batch_size - self.batch_size = batch_size - self.random = np.random.RandomState(random_seed) - self.pairs = pairs - self.shuffle = shuffle - self.pad = pad - self.causal_shift = causal_shift - self.endless = endless - self.state = None - - self.cameras = cameras - self.poses_3d = poses_3d - self.poses_2d = poses_2d - - self.augment = augment - self.kps_left = kps_left - self.kps_right = kps_right - self.joints_left = joints_left - self.joints_right = joints_right - - def num_frames(self): - return self.num_batches * self.batch_size - - def random_state(self): - return self.random - - def set_random_state(self, random): - self.random = random - - def augment_enabled(self): - return self.augment - - def next_pairs(self): - if self.state is None: - if self.shuffle: - pairs = self.random.permutation(self.pairs) - else: - pairs = self.pairs - return 0, pairs - else: - return self.state - - def next_epoch(self): - enabled = True - while enabled: - 
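# One pass over `pairs` below yields a full epoch of evaluation batches;
# with endless=False the while-loop body runs exactly once per call.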
start_idx, pairs = self.next_pairs() - for b_i in range(start_idx, self.num_batches): - chunks = pairs[b_i * self.batch_size: (b_i + 1) * self.batch_size] - for i, (seq_i, start_3d, end_3d, flip) in enumerate(chunks): - start_2d = start_3d - self.pad - self.causal_shift - end_2d = end_3d + self.pad - self.causal_shift - - # 2D poses - seq_2d = self.poses_2d[seq_i] - low_2d = max(start_2d, 0) - high_2d = min(end_2d, seq_2d.shape[0]) - pad_left_2d = low_2d - start_2d - pad_right_2d = end_2d - high_2d - if pad_left_2d != 0 or pad_right_2d != 0: - self.batch_2d[i] = np.pad(seq_2d[low_2d:high_2d], ((pad_left_2d, pad_right_2d), (0, 0), (0, 0)), - 'edge') - if self.augment: - self.batch_2d_flip[i] = np.pad(seq_2d[low_2d:high_2d], - ((pad_left_2d, pad_right_2d), (0, 0), (0, 0)), - 'edge') - - else: - self.batch_2d[i] = seq_2d[low_2d:high_2d] - if self.augment: - self.batch_2d_flip[i] = seq_2d[low_2d:high_2d] - - if self.augment: - self.batch_2d_flip[i, :, :, 0] *= -1 - self.batch_2d_flip[i, :, self.kps_left + self.kps_right] = self.batch_2d_flip[i, :, - self.kps_right + self.kps_left] - - # 3D poses - if self.poses_3d is not None: - seq_3d = self.poses_3d[seq_i] - low_3d = max(start_3d, 0) - high_3d = min(end_3d, seq_3d.shape[0]) - pad_left_3d = low_3d - start_3d - pad_right_3d = end_3d - high_3d - if pad_left_3d != 0 or pad_right_3d != 0: - self.batch_3d[i] = np.pad(seq_3d[low_3d:high_3d], - ((pad_left_3d, pad_right_3d), (0, 0), (0, 0)), 'edge') - else: - self.batch_3d[i] = seq_3d[low_3d:high_3d] - - if flip: - self.batch_3d[i, :, :, 0] *= -1 - self.batch_3d[i, :, self.joints_left + self.joints_right] = \ - self.batch_3d[i, :, self.joints_right + self.joints_left] - - # Cameras - if self.cameras is not None: - self.batch_cam[i] = self.cameras[seq_i] - if flip: - # Flip horizontal distortion coefficients - self.batch_cam[i, 2] *= -1 - self.batch_cam[i, 7] *= -1 - - if self.endless: - self.state = (b_i + 1, pairs) - - if self.augment: - if self.poses_3d is None and self.cameras is None: - yield None, None, self.batch_2d[:len(chunks)], self.batch_2d_flip[:len(chunks)] - elif self.poses_3d is not None and self.cameras is None: - yield None, self.batch_3d[:len(chunks)], self.batch_2d[:len(chunks)], self.batch_2d_flip[ - :len(chunks)] - elif self.poses_3d is None: - yield self.batch_cam[:len(chunks)], None, self.batch_2d[:len(chunks)], self.batch_2d_flip[ - :len(chunks)] - else: - yield self.batch_cam[:len(chunks)], self.batch_3d[:len(chunks)], self.batch_2d[:len( - chunks)], self.batch_2d_flip[:len(chunks)] - else: - if self.poses_3d is None and self.cameras is None: - yield None, None, self.batch_2d[:len(chunks)] - elif self.poses_3d is not None and self.cameras is None: - yield None, self.batch_3d[:len(chunks)], self.batch_2d[:len(chunks)] - elif self.poses_3d is None: - yield self.batch_cam[:len(chunks)], None, self.batch_2d[:len(chunks)] - else: - yield self.batch_cam[:len(chunks)], self.batch_3d[:len(chunks)], self.batch_2d[:len(chunks)] - - if self.endless: - self.state = None - else: - enabled = False \ No newline at end of file diff --git a/spaces/Saurabh46/MyChatGPT-DEMO/app.py b/spaces/Saurabh46/MyChatGPT-DEMO/app.py deleted file mode 100644 index c9fa37574ed265ee198e09643e8bcc10769450a9..0000000000000000000000000000000000000000 --- a/spaces/Saurabh46/MyChatGPT-DEMO/app.py +++ /dev/null @@ -1,40 +0,0 @@ -from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, ServiceContext, StorageContext, load_index_from_storage -from langchain import OpenAI -import gradio -import 
os - -os.environ["OPENAI_API_KEY"] = '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' - -def construct_index(directory_path): - num_outputs = 512 - - _llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs)) - - service_context = ServiceContext.from_defaults(llm_predictor=_llm_predictor) - - docs = SimpleDirectoryReader(directory_path).load_data() - - index = GPTVectorStoreIndex.from_documents(docs, service_context=service_context) - - index.storage_context.persist(persist_dir="indexes") - - return index - -def chatbot(input_text): - - storage_context = StorageContext.from_defaults(persist_dir="indexes") - - query_engne = load_index_from_storage(storage_context).as_query_engine() - - response = query_engne.query(input_text) - - return response.response - -iface = gradio.Interface(fn=chatbot, - inputs=gradio.inputs.Textbox(lines=4, label="Enter your question here"), - outputs=gradio.outputs.Textbox(label="Generated Text"), - title="My Custom trained AI Chatbot") - -index = construct_index("trainingData") - -iface.launch() diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/utils.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/utils.py deleted file mode 100644 index f4805cdb25e7c50611412a19340ad525d1251d7b..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/utils.py +++ /dev/null @@ -1,121 +0,0 @@ -import json - -import numpy as np -import torch -from tqdm import tqdm - - -def load_data(file_name: str = "./infer/lib/uvr5_pack/name_params.json") -> dict: - with open(file_name, "r") as f: - data = json.load(f) - - return data - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def inference(X_spec, device, model, aggressiveness, data): - """ - data : dic configs - """ - - def _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True - ): - model.eval() - with torch.no_grad(): - preds = [] - - iterations = [n_window] - - total_iterations = sum(iterations) - for i in tqdm(range(n_window)): - start = i * roi_size - X_mag_window = X_mag_pad[ - None, :, :, start : start + data["window_size"] - ] - X_mag_window = torch.from_numpy(X_mag_window) - if is_half: - X_mag_window = X_mag_window.half() - X_mag_window = X_mag_window.to(device) - - pred = model.predict(X_mag_window, aggressiveness) - - pred = pred.detach().cpu().numpy() - preds.append(pred[0]) - - pred = np.concatenate(preds, axis=2) - return pred - - def preprocess(X_spec): - X_mag = np.abs(X_spec) - X_phase = np.angle(X_spec) - - return X_mag, X_phase - - X_mag, X_phase = preprocess(X_spec) - - coef = X_mag.max() - X_mag_pre = X_mag / coef - - n_frame = X_mag_pre.shape[2] - pad_l, pad_r, roi_size = make_padding(n_frame, data["window_size"], model.offset) - n_window = int(np.ceil(n_frame / roi_size)) - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - if list(model.state_dict().values())[0].dtype == torch.float16: - is_half = True - else: - is_half = False - pred = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred = pred[:, :, :n_frame] - - if data["tta"]: - pad_l += roi_size // 2 - pad_r += roi_size // 2 - n_window += 1 - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - pred_tta = _execute( - X_mag_pad, roi_size, n_window, 
device, model, aggressiveness, is_half - ) - pred_tta = pred_tta[:, :, roi_size // 2 :] - pred_tta = pred_tta[:, :, :n_frame] - - return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase) - else: - return pred * coef, X_mag, np.exp(1.0j * X_phase) - - -def _get_name_params(model_path, model_hash): - data = load_data() - flag = False - ModelName = model_path - for type in list(data): - for model in list(data[type][0]): - for i in range(len(data[type][0][model])): - if str(data[type][0][model][i]["hash_name"]) == model_hash: - flag = True - elif str(data[type][0][model][i]["hash_name"]) in ModelName: - flag = True - - if flag: - model_params_auto = data[type][0][model][i]["model_params"] - param_name_auto = data[type][0][model][i]["param_name"] - if type == "equivalent": - return param_name_auto, model_params_auto - else: - flag = False - return param_name_auto, model_params_auto diff --git a/spaces/Shriharsh/Text_To_Image/app.py b/spaces/Shriharsh/Text_To_Image/app.py deleted file mode 100644 index fab0665b3a2f5dcf84cf557f6a79b9286c2cfe25..0000000000000000000000000000000000000000 --- a/spaces/Shriharsh/Text_To_Image/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr -from PIL import Image -from authtoken import auth_token -import torch -import torch.cuda.amp as amp -from diffusers import StableDiffusionPipeline - - - -model_id = "stabilityai/stable-diffusion-2-1" - -device = torch.device("cpu") # Default to CPU device -if torch.cuda.is_available(): - device = torch.device("cuda") - -# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) -pipe.to(device) - -def generate(prompt): - with torch.no_grad(), amp.autocast(enabled=device != torch.device("cpu")): - image = pipe(prompt, guidance_scale=8.5)["sample"][0] - - image.save('generatedimage.png') - return image - -def predict_text(prompt): - image = generate(prompt) - return image - -def predict_image(input_image): - input_image.save('input_image.png') - prompt = input("Enter your prompt: ") - image = generate(prompt) - return image - -iface = gr.Interface( - fn=predict_text, - inputs="text", - outputs="image", - capture_session=True, -) -iface.launch() - - diff --git a/spaces/SoUmNerd/RemoteMojo/main.py b/spaces/SoUmNerd/RemoteMojo/main.py deleted file mode 100644 index 81eb19e7a07b08449c3f0d7e48fde2aa1fb78f8f..0000000000000000000000000000000000000000 --- a/spaces/SoUmNerd/RemoteMojo/main.py +++ /dev/null @@ -1,24 +0,0 @@ -from fastapi import FastAPI, Request, Response -from pydantic import BaseModel - -import subprocess -from regex import find_imports - -app = FastAPI() - -@app.post("/code") -async def run_mojo_code(request:Request) -> Response: - data = await request.json() - code = data["code"] - filename = data["filename"] - - try: - imports = find_imports(code) - for imported in imports: - subprocess.call(["python3", "-m", "pip", "install", imported], shell=True) - with open(filename, "w") as f: - f.write(code) - - return Response(content={"sucess":True, "output": subprocess.check_output(["mojo", filename]).decode("utf-8")}, status_code=200) - except: - return Response(content={"sucess":False}, status_code=500) \ No newline at end of file diff --git a/spaces/StarbucksCN/starbucks_doc/llama/utils.py b/spaces/StarbucksCN/starbucks_doc/llama/utils.py deleted file mode 100644 index ac335e5b03c1b96f4634181dbc226c560d48a3d1..0000000000000000000000000000000000000000 --- a/spaces/StarbucksCN/starbucks_doc/llama/utils.py 
+++ /dev/null @@ -1,5 +0,0 @@ -import os - - -def is_local_storage_files_ready(persist_dir: str) -> bool: - return os.path.exists(persist_dir) and len(os.listdir(persist_dir)) != 0 diff --git a/spaces/SujanMidatani/resume_details_extractor/README.md b/spaces/SujanMidatani/resume_details_extractor/README.md deleted file mode 100644 index bb989ca2d909d631e2edad2eea159fb4ce10b962..0000000000000000000000000000000000000000 --- a/spaces/SujanMidatani/resume_details_extractor/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Resume To Questions Generator -emoji: 🏃 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FontFile.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FontFile.py deleted file mode 100644 index 5ec0a6632e3182382467688662ebc5e6c324da91..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FontFile.py +++ /dev/null @@ -1,110 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# base class for raster font file parsers -# -# history: -# 1997-06-05 fl created -# 1997-08-19 fl restrict image width -# -# Copyright (c) 1997-1998 by Secret Labs AB -# Copyright (c) 1997-1998 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -import os - -from . import Image, _binary - -WIDTH = 800 - - -def puti16(fp, values): - """Write network order (big-endian) 16-bit sequence""" - for v in values: - if v < 0: - v += 65536 - fp.write(_binary.o16be(v)) - - -class FontFile: - """Base class for raster font file handlers.""" - - bitmap = None - - def __init__(self): - self.info = {} - self.glyph = [None] * 256 - - def __getitem__(self, ix): - return self.glyph[ix] - - def compile(self): - """Create metrics and bitmap""" - - if self.bitmap: - return - - # create bitmap large enough to hold all data - h = w = maxwidth = 0 - lines = 1 - for glyph in self: - if glyph: - d, dst, src, im = glyph - h = max(h, src[3] - src[1]) - w = w + (src[2] - src[0]) - if w > WIDTH: - lines += 1 - w = src[2] - src[0] - maxwidth = max(maxwidth, w) - - xsize = maxwidth - ysize = lines * h - - if xsize == 0 and ysize == 0: - return "" - - self.ysize = h - - # paste glyphs into bitmap - self.bitmap = Image.new("1", (xsize, ysize)) - self.metrics = [None] * 256 - x = y = 0 - for i in range(256): - glyph = self[i] - if glyph: - d, dst, src, im = glyph - xx = src[2] - src[0] - # yy = src[3] - src[1] - x0, y0 = x, y - x = x + xx - if x > WIDTH: - x, y = 0, y + h - x0, y0 = x, y - x = xx - s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0 - self.bitmap.paste(im.crop(src), s) - self.metrics[i] = d, dst, s - - def save(self, filename): - """Save font""" - - self.compile() - - # font data - self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG") - - # font metrics - with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp: - fp.write(b"PILfont\n") - fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!! 
- fp.write(b"DATA\n") - for id in range(256): - m = self.metrics[id] - if not m: - puti16(fp, [0] * 10) - else: - puti16(fp, m[0] + m[1] + m[2]) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/pydev_imports.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/pydev_imports.py deleted file mode 100644 index edc24290881a6255642a10ffe7baedc00d0823af..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/pydev_imports.py +++ /dev/null @@ -1,13 +0,0 @@ -from _pydev_bundle._pydev_saved_modules import xmlrpclib -from _pydev_bundle._pydev_saved_modules import xmlrpcserver - -SimpleXMLRPCServer = xmlrpcserver.SimpleXMLRPCServer - -from _pydev_bundle._pydev_execfile import execfile - -from _pydev_bundle._pydev_saved_modules import _queue - -from _pydevd_bundle.pydevd_exec2 import Exec - -from urllib.parse import quote, quote_plus, unquote_plus # @UnresolvedImport - diff --git a/spaces/Sup3r/img-to-music/app.py b/spaces/Sup3r/img-to-music/app.py deleted file mode 100644 index 53ba74a6bbbf3c20f5df8f7b3cabc7c84bc63fdd..0000000000000000000000000000000000000000 --- a/spaces/Sup3r/img-to-music/app.py +++ /dev/null @@ -1,117 +0,0 @@ -import gradio as gr -import os -import requests -import urllib - -from os import path -from pydub import AudioSegment - -img_to_text = gr.Blocks.load(name="spaces/pharma/CLIP-Interrogator") -text_to_music = gr.Interface.load("spaces/fffiloni/text-2-music") - -from share_btn import community_icon_html, loading_icon_html, share_js - -def get_prompts(uploaded_image): - - prompt = img_to_text(uploaded_image, fn_index=1)[0] - - music_result = get_music(prompt) - - return music_result - -def get_music(prompt): - - result = text_to_music(prompt, fn_index=0) - - print(f"""————— - NEW RESULTS - prompt : {prompt} - music : {result} - ——————— - """) - - url = result - save_as = "file.mp3" - - data = urllib.request.urlopen(url) - - f = open(save_as,'wb') - f.write(data.read()) - f.close() - - wave_file="file.wav" - - sound = AudioSegment.from_mp3(save_as) - sound.export(wave_file, format="wav") - - return wave_file, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -css = """ -#col-container {max-width: 700px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -""" - -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="col-container"): - gr.HTML("""
-                  Image to Music
-                  Sends an image in to CLIP Interrogator
-                  to generate a text prompt which is then run through
-                  Mubert text-to-music to generate music from the input image!
""") - - - input_img = gr.Image(type="filepath", elem_id="input-img") - generate = gr.Button("Generate Music from Image") - - music_output = gr.Audio(label="Result", type="filepath", elem_id="music-output") - - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - - generate.click(get_prompts, inputs=[input_img], outputs=[music_output, share_button, community_icon, loading_icon], api_name="i2m") - share_button.click(None, [], [], _js=share_js) - -demo.queue(max_size=32, concurrency_count=20).launch() \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/onnx_export.py b/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/onnx_export.py deleted file mode 100644 index 7a5162ce214830df501bdb81edb66c095122f69d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/onnx_export.py +++ /dev/null @@ -1,120 +0,0 @@ -""" ONNX export script - -Export PyTorch models as ONNX graphs. - -This export script originally started as an adaptation of code snippets found at -https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html - -The default parameters work with PyTorch 1.6 and ONNX 1.7 and produce an optimal ONNX graph -for hosting in the ONNX runtime (see onnx_validate.py). To export an ONNX model compatible -with caffe2 (see caffe2_benchmark.py and caffe2_validate.py), the --keep-init and --aten-fallback -flags are currently required. - -Older versions of PyTorch/ONNX (tested PyTorch 1.4, ONNX 1.5) do not need extra flags for -caffe2 compatibility, but they produce a model that isn't as fast running on ONNX runtime. - -Most new release of PyTorch and ONNX cause some sort of breakage in the export / usage of ONNX models. -Please do your research and search ONNX and PyTorch issue tracker before asking me. Thanks. - -Copyright 2020 Ross Wightman -""" -import argparse -import torch -import numpy as np - -import onnx -import geffnet - -parser = argparse.ArgumentParser(description='PyTorch ImageNet Validation') -parser.add_argument('output', metavar='ONNX_FILE', - help='output model filename') -parser.add_argument('--model', '-m', metavar='MODEL', default='mobilenetv3_large_100', - help='model architecture (default: mobilenetv3_large_100)') -parser.add_argument('--opset', type=int, default=10, - help='ONNX opset to use (default: 10)') -parser.add_argument('--keep-init', action='store_true', default=False, - help='Keep initializers as input. Needed for Caffe2 compatible export in newer PyTorch/ONNX.') -parser.add_argument('--aten-fallback', action='store_true', default=False, - help='Fallback to ATEN ops. Helps fix AdaptiveAvgPool issue with Caffe2 in newer PyTorch/ONNX.') -parser.add_argument('--dynamic-size', action='store_true', default=False, - help='Export model width dynamic width/height. 
Not recommended for "tf" models with SAME padding.') -parser.add_argument('-b', '--batch-size', default=1, type=int, - metavar='N', help='mini-batch size (default: 1)') -parser.add_argument('--img-size', default=None, type=int, - metavar='N', help='Input image dimension, uses model default if empty') -parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN', - help='Override mean pixel value of dataset') -parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD', - help='Override std deviation of of dataset') -parser.add_argument('--num-classes', type=int, default=1000, - help='Number classes in dataset') -parser.add_argument('--checkpoint', default='', type=str, metavar='PATH', - help='path to checkpoint (default: none)') - - -def main(): - args = parser.parse_args() - - args.pretrained = True - if args.checkpoint: - args.pretrained = False - - print("==> Creating PyTorch {} model".format(args.model)) - # NOTE exportable=True flag disables autofn/jit scripted activations and uses Conv2dSameExport layers - # for models using SAME padding - model = geffnet.create_model( - args.model, - num_classes=args.num_classes, - in_chans=3, - pretrained=args.pretrained, - checkpoint_path=args.checkpoint, - exportable=True) - - model.eval() - - example_input = torch.randn((args.batch_size, 3, args.img_size or 224, args.img_size or 224), requires_grad=True) - - # Run model once before export trace, sets padding for models with Conv2dSameExport. This means - # that the padding for models with Conv2dSameExport (most models with tf_ prefix) is fixed for - # the input img_size specified in this script. - # Opset >= 11 should allow for dynamic padding, however I cannot get it to work due to - # issues in the tracing of the dynamic padding or errors attempting to export the model after jit - # scripting it (an approach that should work). Perhaps in a future PyTorch or ONNX versions... - model(example_input) - - print("==> Exporting model to ONNX format at '{}'".format(args.output)) - input_names = ["input0"] - output_names = ["output0"] - dynamic_axes = {'input0': {0: 'batch'}, 'output0': {0: 'batch'}} - if args.dynamic_size: - dynamic_axes['input0'][2] = 'height' - dynamic_axes['input0'][3] = 'width' - if args.aten_fallback: - export_type = torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK - else: - export_type = torch.onnx.OperatorExportTypes.ONNX - - torch_out = torch.onnx._export( - model, example_input, args.output, export_params=True, verbose=True, input_names=input_names, - output_names=output_names, keep_initializers_as_inputs=args.keep_init, dynamic_axes=dynamic_axes, - opset_version=args.opset, operator_export_type=export_type) - - print("==> Loading and checking exported model from '{}'".format(args.output)) - onnx_model = onnx.load(args.output) - onnx.checker.check_model(onnx_model) # assuming throw on error - print("==> Passed") - - if args.keep_init and args.aten_fallback: - import caffe2.python.onnx.backend as onnx_caffe2 - # Caffe2 loading only works properly in newer PyTorch/ONNX combos when - # keep_initializers_as_inputs and aten_fallback are set to True. 
- print("==> Loading model into Caffe2 backend and comparing forward pass.".format(args.output)) - caffe2_backend = onnx_caffe2.prepare(onnx_model) - B = {onnx_model.graph.input[0].name: x.data.numpy()} - c2_out = caffe2_backend.run(B)[0] - np.testing.assert_almost_equal(torch_out.data.numpy(), c2_out, decimal=5) - print("==> Passed") - - -if __name__ == '__main__': - main() diff --git a/spaces/Surfrider/surfnet/tracking/__init__.py b/spaces/Surfrider/surfnet/tracking/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/TRaw/starchat-assist/README.md b/spaces/TRaw/starchat-assist/README.md deleted file mode 100644 index 0f1bab38fafa8b0d30166007395b55dbafd27237..0000000000000000000000000000000000000000 --- a/spaces/TRaw/starchat-assist/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Starchat Assist -emoji: 🏢 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Talo88/Tumer-Detection/README.md b/spaces/Talo88/Tumer-Detection/README.md deleted file mode 100644 index 7502765ff83ada56fc06925636d1d569fd44da00..0000000000000000000000000000000000000000 --- a/spaces/Talo88/Tumer-Detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tumer Detection -emoji: ⚡ -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false -python_version: 3.10.12 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TandCAcceptMe/face-swap-docker/plugins/plugin_txt2clip.py b/spaces/TandCAcceptMe/face-swap-docker/plugins/plugin_txt2clip.py deleted file mode 100644 index f330f83837c0a237cc2e7d95c493000cb595c94a..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/plugins/plugin_txt2clip.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import cv2 -import numpy as np -import torch -import threading -from chain_img_processor import ChainImgProcessor, ChainImgPlugin -from torchvision import transforms -from clip.clipseg import CLIPDensePredT -from numpy import asarray - - -THREAD_LOCK_CLIP = threading.Lock() - -modname = os.path.basename(__file__)[:-3] # calculating modname - -model_clip = None - - - - -# start function -def start(core:ChainImgProcessor): - manifest = { # plugin settings - "name": "Text2Clip", # name - "version": "1.0", # version - - "default_options": { - }, - "img_processor": { - "txt2clip": Text2Clip - } - } - return manifest - -def start_with_options(core:ChainImgProcessor, manifest:dict): - pass - - - -class Text2Clip(ChainImgPlugin): - - def load_clip_model(self): - global model_clip - - if model_clip is None: - device = torch.device(super().device) - model_clip = CLIPDensePredT(version='ViT-B/16', reduce_dim=64, complex_trans_conv=True) - model_clip.eval(); - model_clip.load_state_dict(torch.load('models/CLIP/rd64-uni-refined.pth', map_location=torch.device('cpu')), strict=False) - model_clip.to(device) - - - def init_plugin(self): - self.load_clip_model() - - def process(self, frame, params:dict): - if "face_detected" in params: - if not params["face_detected"]: - return frame - - return self.mask_original(params["original_frame"], frame, params["clip_prompt"]) - - - def mask_original(self, img1, img2, keywords): - global model_clip - - source_image_small = cv2.resize(img1, 
(256,256)) - - img_mask = np.full((source_image_small.shape[0],source_image_small.shape[1]), 0, dtype=np.float32) - mask_border = 1 - l = 0 - t = 0 - r = 1 - b = 1 - - mask_blur = 5 - clip_blur = 5 - - img_mask = cv2.rectangle(img_mask, (mask_border+int(l), mask_border+int(t)), - (256 - mask_border-int(r), 256-mask_border-int(b)), (255, 255, 255), -1) - img_mask = cv2.GaussianBlur(img_mask, (mask_blur*2+1,mask_blur*2+1), 0) - img_mask /= 255 - - - input_image = source_image_small - - transform = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - transforms.Resize((256, 256)), - ]) - img = transform(input_image).unsqueeze(0) - - thresh = 0.5 - prompts = keywords.split(',') - with THREAD_LOCK_CLIP: - with torch.no_grad(): - preds = model_clip(img.repeat(len(prompts),1,1,1), prompts)[0] - clip_mask = torch.sigmoid(preds[0][0]) - for i in range(len(prompts)-1): - clip_mask += torch.sigmoid(preds[i+1][0]) - - clip_mask = clip_mask.data.cpu().numpy() - np.clip(clip_mask, 0, 1) - - clip_mask[clip_mask>thresh] = 1.0 - clip_mask[clip_mask<=thresh] = 0.0 - kernel = np.ones((5, 5), np.float32) - clip_mask = cv2.dilate(clip_mask, kernel, iterations=1) - clip_mask = cv2.GaussianBlur(clip_mask, (clip_blur*2+1,clip_blur*2+1), 0) - - img_mask *= clip_mask - img_mask[img_mask<0.0] = 0.0 - - img_mask = cv2.resize(img_mask, (img2.shape[1], img2.shape[0])) - img_mask = np.reshape(img_mask, [img_mask.shape[0],img_mask.shape[1],1]) - - target = img2.astype(np.float32) - result = (1-img_mask) * target - result += img_mask * img1.astype(np.float32) - return np.uint8(result) - diff --git a/spaces/Tej3/ECG_Classification/models/RNN.py b/spaces/Tej3/ECG_Classification/models/RNN.py deleted file mode 100644 index cefd92f5e911f0ac942ffca4c0c5013bcce6bdd2..0000000000000000000000000000000000000000 --- a/spaces/Tej3/ECG_Classification/models/RNN.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch -import torch.nn as nn - - -class RNN(nn.Module): - def __init__(self, input_dim=12, hidden_dim=64, num_layers=2, num_classes=5, cuda=True, device='cuda'): - super(RNN, self).__init__() - self.hidden_dim = hidden_dim - self.num_layers = num_layers - self.device = device - - self.lstm = nn.LSTM(input_size=input_dim, hidden_size=self.hidden_dim, - num_layers=self.num_layers, batch_first=True) - self.fc1 = nn.Linear(self.hidden_dim, self.hidden_dim) - self.fc2 = nn.Linear(self.hidden_dim, num_classes) - self.relu = nn.ReLU() - - def forward(self, x, notes): - h = torch.zeros(self.num_layers, x.size(0), self.hidden_dim) - c = torch.zeros(self.num_layers, x.size(0), self.hidden_dim) - - nn.init.xavier_normal_(h) - nn.init.xavier_normal_(c) - h = h.to(self.device) - c = c.to(self.device) - x = x.to(self.device) - - output, _ = self.lstm(x, (h, c)) - - out = self.fc2(self.relu(self.fc1(output[:, -1, :]))) - - return out - - -class MMRNN(nn.ModuleList): - def __init__(self, input_dim=12, hidden_dim=64, num_layers=2, num_classes=5, embed_size=768, device="cuda"): - super(MMRNN, self).__init__() - self.hidden_dim = hidden_dim - self.num_layers = num_layers - self.device = device - - self.lstm = nn.LSTM(input_size=input_dim, hidden_size=self.hidden_dim, - num_layers=self.num_layers, batch_first=True) - self.fc1 = nn.Linear(self.hidden_dim, embed_size) - self.fc2 = nn.Linear(embed_size, num_classes) - - self.lnorm_out = nn.LayerNorm(embed_size) - self.lnorm_embed = nn.LayerNorm(embed_size) - - def forward(self, x, note): - h = torch.zeros(self.num_layers, x.size(0), 
self.hidden_dim) - c = torch.zeros(self.num_layers, x.size(0), self.hidden_dim) - - nn.init.xavier_normal_(h) - nn.init.xavier_normal_(c) - h = h.to(self.device) - c = c.to(self.device) - x = x.to(self.device) - note = note.to(self.device) - - output, _ = self.lstm(x, (h, c)) - # Take last hidden state - out = self.fc1(output[:, -1, :]) - - note = self.lnorm_embed(note) - out = self.lnorm_out(out) - out = note + out - - out = self.fc2(out) - - return out.squeeze(1) diff --git a/spaces/Tonic1/falcon-180b-demo/README.md b/spaces/Tonic1/falcon-180b-demo/README.md deleted file mode 100644 index 04189396d29fcc4721c66850250efc7c85a18276..0000000000000000000000000000000000000000 --- a/spaces/Tonic1/falcon-180b-demo/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Falcon-180B Demo -emoji: 💬 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -duplicated_from: tiiuae/falcon-180b-demo ---- diff --git a/spaces/Torcat/torcat-test/config.py b/spaces/Torcat/torcat-test/config.py deleted file mode 100644 index 70c110bba3c870510dd6844472207194423c28a8..0000000000000000000000000000000000000000 --- a/spaces/Torcat/torcat-test/config.py +++ /dev/null @@ -1,11 +0,0 @@ -import os - -# FOR MODELS -MODELS_FOLDER_PATH = os.path.join(os.path.dirname(__file__), 'models') - -# FOR OPTIONS -OPTIONS = { - 'normal': 'Normal', - 'segmentation_2_x_2': 'Segmentation 2x2', - 'segmentation_4_x_4': 'Segmentation 4x4' -} \ No newline at end of file diff --git a/spaces/TusharNautiyal/Dynamic-Movie-Recommender-With-Sentiment-Analysis/README.md b/spaces/TusharNautiyal/Dynamic-Movie-Recommender-With-Sentiment-Analysis/README.md deleted file mode 100644 index 699df93bea860210ebeba74f98da24bbbf5cf39e..0000000000000000000000000000000000000000 --- a/spaces/TusharNautiyal/Dynamic-Movie-Recommender-With-Sentiment-Analysis/README.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Dynamic Movie Recommender With Sentiment Analysis -emoji: 🚀 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- -# Dynamic NLP Model Movie-Recommender-System With Sentiment Analysis -***Check Deployment*** - - - -A content-based movie recommender system using NLP, with dynamic model selection between a pre-trained BERT model, Bag of Words, TF-IDF, Word2Vec, and TF-IDF+Word2Vec on the TMDB dataset. -This movie recommender was created to better understand how each NLP-model-based recommendation works and how effective it is based on multiple parameters. -For each movie you can also enter a review, on which a sentiment analysis model will work and tell you whether your review was a good or a bad one. -Sometimes, if a recommendation cannot be found, just try to refresh and search again, or change the name to a similar movie title. -If you like this repository, do star it. -# How to Use -Models are loaded using cloudpickle, and **session states** are used to stop models from being downloaded again and again; this improves loading speed and loading times. The app is created using **streamlit**. A rough sketch of that caching pattern follows, and below it is a quick demonstration of how it works.
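The sketch only illustrates the session-state caching idea; the file name, loader function, and cached object are assumptions for illustration, not the project's actual code.

```python
# Hypothetical sketch: keep an unpickled model in Streamlit's session state so that
# app reruns reuse the cached object instead of loading it from disk every time.
import cloudpickle
import streamlit as st

MODEL_PATH = "similarity_model.pkl"  # assumed file name

def get_model():
    # Load once per browser session; later reruns hit the cached copy.
    if "model" not in st.session_state:
        with open(MODEL_PATH, "rb") as f:
            st.session_state["model"] = cloudpickle.load(f)
    return st.session_state["model"]

model = get_model()
```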
- -***Run this command in the CLI*** - -``` -streamlit run app.py -``` - -**Recommender Demo**: - -https://user-images.githubusercontent.com/74553737/193135421-80a4c790-d14e-4322-982c-36ec7a16aea9.mp4 - -Sometimes an index is not found because either the movie poster is not available in the API or the movie name could not be matched; try adding some variations to the name, e.g. pirates, caribbean, sea, monster, or other words that could appear in a movie title. - -**Sentiment Analysis Demo**: - -https://user-images.githubusercontent.com/74553737/193136299-185453fa-3235-49a3-99df-c7c2f45ff19c.mp4 - -Try to write reviews of 20-50 words for better sentiment analysis. We trained the sentiment analysis model with a **random forest** classifier, as it gave good accuracy, and a **TF-IDF** vectorizer; for more details you can check the notebook. - -# Understanding TF-IDF with Word2Vec Embeddings - -**TF-IDF** stands for term frequency-inverse document frequency. It measures the importance of a given word relative to other words in the document and in the corpus. It is computed from two quantities, TF and IDF; combining the two gives the TF-IDF score. - -Calculate the TF-IDF score for each word in the corpus. Let's call the **TF-IDF** scores ***tf1, tf2, tf3, ... and so on*** up to n. - -![chart](https://user-images.githubusercontent.com/74553737/193222385-02e7c10d-2589-4539-a981-3bb398fc4d38.png) - -After that, calculate the **Word2Vec** vector for each word in the description; let's call them ***W2V1, W2V2, W2V3, ... and so on*** up to n. - -![chart](https://user-images.githubusercontent.com/74553737/193222528-e04ef47b-5725-4ee9-a4da-8a6ff72bd64c.png) - - -**Multiply** the ***TF-IDF*** score by the ***Word2Vec vector*** representation of each word and **sum** the results. - -![chart](https://user-images.githubusercontent.com/74553737/193222659-aba7160d-db53-4b45-9915-a13608b8c254.png) - - -Then **divide** the total by the sum of the TF-IDF scores. These weighted vectors are the new vectors we use for cosine similarity in the recommender model. - -Denoting each word by i and the total number of words by n, **the complete formula is** - -![chart](https://user-images.githubusercontent.com/74553737/193220196-d32d1ac3-3aae-40b5-a1c4-a52cd1b27a4d.png) - -The / sign means divide, and the formula image was created using atomurl.net. For a more detailed walkthrough of ***tf-idf+word2vec***, ***follow me on Medium***, where I have posted a full article on it. A short code sketch of the same computation follows.
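A minimal sketch of the weighted average described above, assuming gensim's Word2Vec and scikit-learn's TfidfVectorizer on toy data; the variable names and corpus are illustrative and not taken from the project.

```python
# Hypothetical sketch of: doc_vector = sum_i(tfidf_i * W2V_i) / sum_i(tfidf_i)
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["space pirates fight a sea monster", "a heist crew robs a casino"]  # toy descriptions
tokenized = [doc.split() for doc in corpus]

w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, seed=0)
tfidf = TfidfVectorizer()
tfidf_matrix = tfidf.fit_transform(corpus)   # one row of TF-IDF scores per description
vocab = tfidf.vocabulary_                    # word -> column index

def doc_vector(doc_idx, tokens):
    # TF-IDF-weighted average of the Word2Vec vectors of one description.
    row = tfidf_matrix[doc_idx].toarray().ravel()
    weights, vectors = [], []
    for tok in tokens:
        if tok in vocab and tok in w2v.wv:
            weights.append(row[vocab[tok]])
            vectors.append(w2v.wv[tok])
    weights, vectors = np.array(weights), np.array(vectors)
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

doc_vectors = np.array([doc_vector(i, toks) for i, toks in enumerate(tokenized)])
```

Cosine similarity between these document vectors is what produces the final recommendation ranking.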
- -# Updates -This project is deployed on hugging face spaces here is the link for the deployed applications ***Check Deployment*** - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VIOD/anime-ai-detect/README.md b/spaces/VIOD/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/VIOD/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vasanth/QuestionAnswering/README.md b/spaces/Vasanth/QuestionAnswering/README.md deleted file mode 100644 index afa68272b9c61e5f43a92873cb6dc3cfce935bb6..0000000000000000000000000000000000000000 --- a/spaces/Vasanth/QuestionAnswering/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: QuestionAnswering -emoji: 🏃 -colorFrom: indigo -colorTo: green -sdk: streamlit -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/VeryYouQ/dis-background-removal/app.py b/spaces/VeryYouQ/dis-background-removal/app.py deleted file mode 100644 index f9b5d48b0d92f5256d0309c08532df3a60cf2628..0000000000000000000000000000000000000000 --- a/spaces/VeryYouQ/dis-background-removal/app.py +++ /dev/null @@ -1,155 +0,0 @@ -import cv2 -import gradio as gr -import os -from PIL import Image -import numpy as np -import torch -from torch.autograd import Variable -from torchvision import transforms -import torch.nn.functional as F -import gdown -import matplotlib.pyplot as plt -import warnings -warnings.filterwarnings("ignore") - -os.system("git clone https://github.com/xuebinqin/DIS") -os.system("mv DIS/IS-Net/* .") - -# project imports -from data_loader_cache import normalize, im_reader, im_preprocess -from models import * - -#Helpers -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -# Download official weights -if not os.path.exists("saved_models"): - os.mkdir("saved_models") - MODEL_PATH_URL = "https://drive.google.com/uc?id=1KyMpRjewZdyYfxHPYcd-ZbanIXtin0Sn" - gdown.download(MODEL_PATH_URL, "saved_models/isnet.pth", use_cookies=False) - -class GOSNormalize(object): - ''' - Normalize the Image using torch.transforms - ''' - def __init__(self, mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225]): - self.mean = mean - self.std = std - - def __call__(self,image): - image = normalize(image,self.mean,self.std) - return image - - -transform = transforms.Compose([GOSNormalize([0.5,0.5,0.5],[1.0,1.0,1.0])]) - -def load_image(im_path, hypar): - im = im_reader(im_path) - im, im_shp = im_preprocess(im, hypar["cache_size"]) - im = torch.divide(im,255.0) - shape = torch.from_numpy(np.array(im_shp)) - return transform(im).unsqueeze(0), shape.unsqueeze(0) # make a batch of image, shape - - -def build_model(hypar,device): - net = hypar["model"]#GOSNETINC(3,1) - - # convert to half precision - if(hypar["model_digit"]=="half"): - net.half() - for layer in net.modules(): - if isinstance(layer, nn.BatchNorm2d): - layer.float() - - net.to(device) - - if(hypar["restore_model"]!=""): - net.load_state_dict(torch.load(hypar["model_path"]+"/"+hypar["restore_model"], map_location=device)) - net.to(device) - net.eval() - return net - - 
-def predict(net, inputs_val, shapes_val, hypar, device): - ''' - Given an Image, predict the mask - ''' - net.eval() - - if(hypar["model_digit"]=="full"): - inputs_val = inputs_val.type(torch.FloatTensor) - else: - inputs_val = inputs_val.type(torch.HalfTensor) - - - inputs_val_v = Variable(inputs_val, requires_grad=False).to(device) # wrap inputs in Variable - - ds_val = net(inputs_val_v)[0] # list of 6 results - - pred_val = ds_val[0][0,:,:,:] # B x 1 x H x W # we want the first one which is the most accurate prediction - - ## recover the prediction spatial size to the orignal image size - pred_val = torch.squeeze(F.upsample(torch.unsqueeze(pred_val,0),(shapes_val[0][0],shapes_val[0][1]),mode='bilinear')) - - ma = torch.max(pred_val) - mi = torch.min(pred_val) - pred_val = (pred_val-mi)/(ma-mi) # max = 1 - - if device == 'cuda': torch.cuda.empty_cache() - return (pred_val.detach().cpu().numpy()*255).astype(np.uint8) # it is the mask we need - -# Set Parameters -hypar = {} # paramters for inferencing - - -hypar["model_path"] ="./saved_models" ## load trained weights from this path -hypar["restore_model"] = "isnet.pth" ## name of the to-be-loaded weights -hypar["interm_sup"] = False ## indicate if activate intermediate feature supervision - -## choose floating point accuracy -- -hypar["model_digit"] = "full" ## indicates "half" or "full" accuracy of float number -hypar["seed"] = 0 - -hypar["cache_size"] = [1024, 1024] ## cached input spatial resolution, can be configured into different size - -## data augmentation parameters --- -hypar["input_size"] = [1024, 1024] ## mdoel input spatial size, usually use the same value hypar["cache_size"], which means we don't further resize the images -hypar["crop_size"] = [1024, 1024] ## random crop size from the input, it is usually set as smaller than hypar["cache_size"], e.g., [920,920] for data augmentation - -hypar["model"] = ISNetDIS() - - # Build Model -net = build_model(hypar, device) - - -def inference(image: Image): - image_path = image - - image_tensor, orig_size = load_image(image_path, hypar) - mask = predict(net, image_tensor, orig_size, hypar, device) - - pil_mask = Image.fromarray(mask).convert("L") - im_rgb = Image.open(image).convert("RGB") - - im_rgba = im_rgb.copy() - im_rgba.putalpha(pil_mask) - - return [im_rgba, pil_mask] - - -title = "Highly Accurate Dichotomous Image Segmentation" -description = "This is an unofficial demo for DIS, a model that can remove the background from a given image. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below.
GitHub: https://github.com/xuebinqin/DIS
Telegram bot: https://t.me/restoration_photo_bot
[![](https://img.shields.io/twitter/follow/DoEvent?label=@DoEvent&style=social)](https://twitter.com/DoEvent)" -article = "
visitor badge
" - -interface = gr.Interface( - fn=inference, - inputs=gr.Image(type='filepath'), - outputs=["image", "image"], - examples=[['robot.png'], ['ship.png']], - title=title, - description=description, - article=article, - allow_flagging='never', - theme="default", - cache_examples=False, - ).launch(enable_queue=True, debug=True) diff --git a/spaces/Whatcoldwind/csgo_investment/api/__init__.py b/spaces/Whatcoldwind/csgo_investment/api/__init__.py deleted file mode 100644 index 2966ecf8b1db894b34255a37417a0aca3f2d5cbb..0000000000000000000000000000000000000000 --- a/spaces/Whatcoldwind/csgo_investment/api/__init__.py +++ /dev/null @@ -1,291 +0,0 @@ -"""buff api""" -import requests -import json -import pickle -import os - -class Goods: - def __init__(self, goods_id, cost=0,token=''): - self.index = 0 - self.id = goods_id # buff id - self.youpin_id = 0 - - self.name = '' # name - self.cost = cost # 购入花费 - self.price = 0 # buff当前价格 - self.steam_price = 0 # steam当前价格 - - self.status = 0 # 0:在库中 1:租出 2:卖出 - self.token=token # youpin 登录token - self.on_sale_count = 0 # youpin在售 - self.on_lease_count = 0 # youpin租出 - self.lease_unit_price = 0 # youpin短租金 - self.long_lease_unit_price = 0 # youpin长租金 - self.youpin_price = 0 # youpin当前价格 - self.deposit = 0 # 押金 - self.sell_price = 0 # 卖出价格 - self.__get_buff() - self.__get_youpin() - - def __get_buff(self): - url = ( - 'https://buff.163.com/api/market/goods/sell_order?game=csgo&goods_id=' - + self.id - ) - r = requests.get(url) - if r.status_code == 200: - data = r.json() - self.price = eval(data['data']['items'][0]['price']) - self.name = data['data']['goods_infos'][self.id]['name'] - self.steam_price = eval( - data['data']['goods_infos'][self.id]['steam_price_cny'] - ) - return True - else: - return False - - def __get_youpin(self): - url = "https://api.youpin898.com/api/homepage/es/template/GetCsGoPagedList" - payload = json.dumps( - { - "listType": "30", - "gameId": "730", - "keyWords": self.name, - "pageIndex": 1, - "pageSize": 20, - "sortType": "0", - "listSortType": "2", - } - ) - headers = { - "accept": "application/json, text/plain, */*", - "accept-language": "zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6", - "apptype": "1", - "authorization": self.token, - "content-type": "application/json", - "sec-ch-ua": "\"Chromium\";v=\"110\", \"Not A(Brand\";v=\"24\", \"Microsoft Edge\";v=\"110\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-site", - "Referer": "https://www.youpin898.com/", - "Referrer-Policy": "strict-origin-when-cross-origin" - } - response = requests.request("POST", url, headers=headers, data=payload).json() - idx=0 - while idx 0.1: - idx+=1 - continue - break - self.youpin_id = response['Data'][idx]['Id'] - self.on_sale_count = response['Data'][idx]["OnSaleCount"] # youpin在售 - self.on_lease_count = response['Data'][idx]["OnLeaseCount"] # youpin租出 - self.lease_unit_price = eval(response['Data'][idx]["LeaseUnitPrice"]) # youpin短租金 - self.long_lease_unit_price = eval( - response['Data'][idx]["LongLeaseUnitPrice"] - ) # youpin长租金 - - self.deposit = eval(response['Data'][idx]["LeaseDeposit"]) # 押金 - - def refresh(self): - self.__get_buff() - self.__get_youpin() - - def sell(self, price): - self.status = 2 - self.sell_price = price - - def lease(self): - self.status = 1 - - def back(self): - self.status = 0 - - def get_status(self): - if self.status == 0 and self.cost != 0: - return "在库中" - elif self.status == 1: - return "租出" - elif 
self.status == 0 and self.cost == 0: - return "观望中" - else: - return "卖出" - - def __call__(self): - if self.cost == 0: - return { - "BuffId": self.id, - "YoupinId": self.youpin_id, - "Name": self.name, - "Cost": self.cost, - "BuffPrice": self.price, - "YoupinPrice": self.youpin_price, - "SteamPrice": self.steam_price, - "Status": self.status, - "OnSaleCount": self.on_sale_count, - "OnLeaseCount": self.on_lease_count, - "LeaseUnitPrice": self.lease_unit_price, - "LongLeaseUnitPrice": self.long_lease_unit_price, - "Deposit": self.deposit, - "RentSaleRatio": self.on_lease_count / self.on_sale_count, # 目前租售比 - "LeaseRatio": self.lease_unit_price / self.price * 100, # 租金比例 - "DepositRatio": self.deposit / self.price * 100, # 押金比例 - "AnnualizedShortTermLeaseRatio": 192 - * self.lease_unit_price - / self.price - * 100, # 年化短租比例 - "AnnualizedLongTermLeaseRatio": 264 - * self.long_lease_unit_price - / self.price - * 100, # 年化长租比例 - "CashRatio": self.price / self.steam_price * 100, # 套现比例 - "BuffYouyouRatio": self.price / self.youpin_price, # buff和有品价格比例 - } - else: - return { - "BuffId": self.id, - "YoupinId": self.youpin_id, - "Name": self.name, - "Cost": self.cost, - "BuffPrice": self.price, - "YoupinPrice": self.youpin_price, - "SteamPrice": self.steam_price, - "Status": self.status, - "OnSaleCount": self.on_sale_count, - "OnLeaseCount": self.on_lease_count, - "LeaseUnitPrice": self.lease_unit_price, - "LongLeaseUnitPrice": self.long_lease_unit_price, - "Deposit": self.deposit, - "RentSaleRatio": self.on_lease_count / self.on_sale_count, # 目前租售比 - "TheoreticalCurrentEarnings": self.price - self.cost, # 理论目前收益 - "TheoreticalCurrentEarningsRate": (self.price - self.cost) - / self.cost - * 100, # 理论目前收益率 - "LeaseRatio": self.lease_unit_price / self.price * 100, # 租金比例 - "DepositRatio": self.deposit / self.price * 100, # 押金比例 - "AnnualizedShortTermLeaseRatio": 192 - * self.lease_unit_price - / self.price - * 100, # 年化短租比例 - "AnnualizedLongTermLeaseRatio": 264 - * self.long_lease_unit_price - / self.price - * 100, # 年化长租比例 - "CashRatio": self.price / self.steam_price * 100, # 套现比例 - "BuffYouyouRatio": self.price / self.youpin_price, # buff和有品价格比例 - } - - -class Inventory: - """库存管理""" - - def __init__(self, path) -> None: - """选择一个库存并启动该库存""" - self.path = path - if os.path.exists(path): - self.__data = pickle.load(open(path, "rb")) - else: - self.__data = {} - - def __call__(self): - return self.__data - - def __iter__(self): - return self.__data.__iter__() - - def add(self, good: Goods): - if good.__class__ == Goods: - good.index = len(self()) - self.__data[good.index] = good - else: - raise TypeError("输入类型错误") - - def delete(self, good): - del self()[good] - - def save(self): - pickle.dump(self.__data, open(self.path, "wb")) - - def reset(self): - self.__data = [] - - def total_cost(self): - return sum([self()[good].cost for good in self()]) - - def total_cost_in_inventory(self): - return sum( - [ - self()[good].cost - for good in self() - if (self()[good].status == 0 and self()[good].cost != 0) - or self()[good].status == 1 - ] - ) - - def calc_buff_earn(self): - return sum( - [ - self()[good].price - self()[good].cost - for good in self() - if (self()[good].cost != 0 & self()[good].status == 0) or self()[good].status == 1 - ] - ) - - def calc_youpin_earn(self): - return sum( - [ - self()[good].youpin_price - self()[good].cost - for good in self() - if (self()[good].cost != 0 & self()[good].status == 0) or self()[good].status == 1 - ] - ) - - def calc_buff_earn_rate(self): - return 
self.calc_buff_earn() / self.total_cost_in_inventory() * 100 - - def calc_youpin_earn_rate(self): - return self.calc_youpin_earn() / self.total_cost_in_inventory() * 100 - - def calc_price(self): - return sum( - [ - self()[good].price - for good in self() - if (self()[good].status == 0 and self()[good].cost != 0) - or self()[good].status == 1 - ] - ) - - def calc_yyyp_price(self): - return sum( - [ - self()[good].youpin_price - for good in self() - if (self()[good].status == 0 and self()[good].cost != 0) - or self()[good].status == 1 - ] - ) - - def sell_earn(self): - return sum( - [self()[good].sell_price for good in self() if self()[good].status == 2] - ) - sum( - [self()[good].cost for good in self() if self()[good].status == 2] - ) - - def sell_price(self): - return sum( - [self()[good].sell_price for good in self() if self()[good].status == 2] - ) - - -def test_tokens(token): - try: - tmp = Goods('33912','1188',token) - tmp.refresh() - return True - except: - return False \ No newline at end of file diff --git a/spaces/XzJosh/Eileen-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Eileen-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Eileen-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/XzJosh/Eileen-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/Eileen-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Eileen-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. 
- # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # merge two consecutive words when both consist entirely of tone-three syllables
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
- # if the previous word is a reduplication, do not merge, because reduplications must go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
- # merge when the last syllable of the first word and the first syllable of the second word are both tone three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
- # if the previous word is a reduplication, do not merge, because reduplications must go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- try:
- seg = self._merge_yi(seg)
- except:
- print("_merge_yi failed")
- seg = self._merge_reduplication(seg)
- seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/YUANAI/DiffspeechResearch/tasks/tts/vocoder_infer/__init__.py b/spaces/YUANAI/DiffspeechResearch/tasks/tts/vocoder_infer/__init__.py deleted file mode 100644 index 80a76af7bba1ce3d3259d4bbb850875f0eafba8e..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/tasks/tts/vocoder_infer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import hifigan \ No newline at end of file diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/README.md b/spaces/YazawaSunrise/so-vits-svc-LoveLive/README.md deleted file mode 100644 index 533d46885a0f2f274f9f3b1d5fd2800de44fc792..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: cc-by-nc-3.0 -title: LoveLive-so-vits-svc -sdk: gradio -sdk_version: 3.4.1 -emoji: 🚀 -colorFrom: yellow -colorTo: red -pinned: false -app_file: app.py ---- \ No newline at end of file diff --git a/spaces/Yunoposter/H377/Dockerfile b/spaces/Yunoposter/H377/Dockerfile deleted file mode 100644 index f01206be382aaddb63e6961068b0d67431f531e3..0000000000000000000000000000000000000000 --- a/spaces/Yunoposter/H377/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -# reverting janitor.ai block -RUN git revert -n 43359779e73f2e2dea97a24c28c33f81d725cc61 -RUN git revert -n c0ac69df2766a260ecb49a2730ac724db391f023 -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/Zeltoria/anime-voice-generator/attentions.py b/spaces/Zeltoria/anime-voice-generator/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Zeltoria/anime-voice-generator/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - 
attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - 
nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/a-v-bely/russian-task-generator/utilities_cookies/build/static/js/2.422ca0c4.chunk.js b/spaces/a-v-bely/russian-task-generator/utilities_cookies/build/static/js/2.422ca0c4.chunk.js deleted file mode 100644 index 1fd17f11d35fc387466e4141d5ff5ba07823b5ae..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/russian-task-generator/utilities_cookies/build/static/js/2.422ca0c4.chunk.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! 
For license information please see 2.422ca0c4.chunk.js.LICENSE.txt */ -(this.webpackJsonpstreamlit_cookie_manager=this.webpackJsonpstreamlit_cookie_manager||[]).push([[2],[function(t,e,n){t.exports=n(10)},function(t,e,n){"use strict";t.exports=n(8)},function(t,e,n){"use strict";n.d(e,"a",(function(){return xf}));var r={};n.r(r),n.d(r,"memcpy",(function(){return Yt})),n.d(r,"joinUint8Arrays",(function(){return Wt})),n.d(r,"toArrayBufferView",(function(){return Ht})),n.d(r,"toInt8Array",(function(){return $t})),n.d(r,"toInt16Array",(function(){return Kt})),n.d(r,"toInt32Array",(function(){return Gt})),n.d(r,"toBigInt64Array",(function(){return qt})),n.d(r,"toUint8Array",(function(){return Jt})),n.d(r,"toUint16Array",(function(){return Zt})),n.d(r,"toUint32Array",(function(){return Qt})),n.d(r,"toBigUint64Array",(function(){return Xt})),n.d(r,"toFloat32Array",(function(){return te})),n.d(r,"toFloat64Array",(function(){return ee})),n.d(r,"toUint8ClampedArray",(function(){return ne})),n.d(r,"toArrayBufferViewIterator",(function(){return ie})),n.d(r,"toInt8ArrayIterator",(function(){return ae})),n.d(r,"toInt16ArrayIterator",(function(){return oe})),n.d(r,"toInt32ArrayIterator",(function(){return ue})),n.d(r,"toUint8ArrayIterator",(function(){return se})),n.d(r,"toUint16ArrayIterator",(function(){return ce})),n.d(r,"toUint32ArrayIterator",(function(){return fe})),n.d(r,"toFloat32ArrayIterator",(function(){return le})),n.d(r,"toFloat64ArrayIterator",(function(){return he})),n.d(r,"toUint8ClampedArrayIterator",(function(){return ye})),n.d(r,"toArrayBufferViewAsyncIterator",(function(){return pe})),n.d(r,"toInt8ArrayAsyncIterator",(function(){return ve})),n.d(r,"toInt16ArrayAsyncIterator",(function(){return be})),n.d(r,"toInt32ArrayAsyncIterator",(function(){return ge})),n.d(r,"toUint8ArrayAsyncIterator",(function(){return me})),n.d(r,"toUint16ArrayAsyncIterator",(function(){return ke})),n.d(r,"toUint32ArrayAsyncIterator",(function(){return we})),n.d(r,"toFloat32ArrayAsyncIterator",(function(){return _e})),n.d(r,"toFloat64ArrayAsyncIterator",(function(){return Ie})),n.d(r,"toUint8ClampedArrayAsyncIterator",(function(){return Se})),n.d(r,"rebaseValueOffsets",(function(){return xe})),n.d(r,"compareArrayLike",(function(){return Ae}));var i={};n.r(i),n.d(i,"getBool",(function(){return un})),n.d(i,"getBit",(function(){return sn})),n.d(i,"setBool",(function(){return cn})),n.d(i,"truncateBitmap",(function(){return fn})),n.d(i,"packBools",(function(){return ln})),n.d(i,"iterateBits",(function(){return hn})),n.d(i,"popcnt_bit_range",(function(){return yn})),n.d(i,"popcnt_array",(function(){return pn})),n.d(i,"popcnt_uint32",(function(){return dn}));var a={};n.r(a),n.d(a,"uint16ToFloat64",(function(){return Nr})),n.d(a,"float64ToUint16",(function(){return Cr}));var o={};n.r(o),n.d(o,"isArrowBigNumSymbol",(function(){return Hr})),n.d(o,"bignumToString",(function(){return Yr})),n.d(o,"bignumToBigInt",(function(){return Wr})),n.d(o,"BN",(function(){return Xr}));var u={};n.r(u),n.d(u,"clampIndex",(function(){return Ci})),n.d(u,"clampRange",(function(){return Vi})),n.d(u,"createElementComparator",(function(){return Pi}));var s={};n.r(s),n.d(s,"BaseInt64",(function(){return ao})),n.d(s,"Uint64",(function(){return oo})),n.d(s,"Int64",(function(){return uo})),n.d(s,"Int128",(function(){return so}));n(3);var c=n(1),f=n.n(c),l=new WeakMap,h=new WeakMap;function y(t){var e=l.get(t);return console.assert(null!=e,"'this' is expected an Event object, but got",t),e}function 
p(t){null==t.passiveListener?t.event.cancelable&&(t.canceled=!0,"function"===typeof t.event.preventDefault&&t.event.preventDefault()):"undefined"!==typeof console&&"function"===typeof console.error&&console.error("Unable to preventDefault inside passive event listener invocation.",t.passiveListener)}function d(t,e){l.set(this,{eventTarget:t,event:e,eventPhase:2,currentTarget:t,canceled:!1,stopped:!1,immediateStopped:!1,passiveListener:null,timeStamp:e.timeStamp||Date.now()}),Object.defineProperty(this,"isTrusted",{value:!1,enumerable:!0});for(var n=Object.keys(e),r=0;r0){for(var t=new Array(arguments.length),e=0;et.length)&&(e=t.length);for(var n=0,r=new Array(e);n=t.length?{done:!0}:{done:!1,value:t[r++]}},e:function(t){throw t},f:i}}throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}var a,o=!0,u=!1;return{s:function(){n=n.call(t)},n:function(){var t=n.next();return o=t.done,t},e:function(t){u=!0,a=t},f:function(){try{o||null==n.return||n.return()}finally{if(u)throw a}}}}function D(t,e,n,r,i,a,o){try{var u=t[a](o),s=u.value}catch(c){return void n(c)}u.done?e(s):Promise.resolve(s).then(r,i)}function L(t){return function(){var e=this,n=arguments;return new Promise((function(r,i){var a=t.apply(e,n);function o(t){D(a,r,i,o,u,"next",t)}function u(t){D(a,r,i,o,u,"throw",t)}o(void 0)}))}}function F(t,e){if(!(t instanceof e))throw new TypeError("Cannot call a class as a function")}function M(t,e){for(var n=0;n>>0)+4294967296*this.high},W.Long.prototype.equals=function(t){return this.low==t.low&&this.high==t.high},W.Long.ZERO=new W.Long(0,0),W.Builder=function(t){if(t)e=t;else var e=1024;this.bb=W.ByteBuffer.allocate(e),this.space=e,this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},W.Builder.prototype.clear=function(){this.bb.clear(),this.space=this.bb.capacity(),this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},W.Builder.prototype.forceDefaults=function(t){this.force_defaults=t},W.Builder.prototype.dataBuffer=function(){return this.bb},W.Builder.prototype.asUint8Array=function(){return this.bb.bytes().subarray(this.bb.position(),this.bb.position()+this.offset())},W.Builder.prototype.prep=function(t,e){t>this.minalign&&(this.minalign=t);for(var n=1+~(this.bb.capacity()-this.space+e)&t-1;this.space=0&&0==this.vtable[e];e--);for(var n=e+1;e>=0;e--)this.addInt16(0!=this.vtable[e]?t-this.vtable[e]:0);this.addInt16(t-this.object_start);var r=(n+2)*W.SIZEOF_SHORT;this.addInt16(r);var i=0,a=this.space;t:for(e=0;e=0;r--)this.writeInt8(n.charCodeAt(r))}this.prep(this.minalign,W.SIZEOF_INT),this.addOffset(t),this.bb.setPosition(this.space)},W.Builder.prototype.requiredField=function(t,e){var n=this.bb.capacity()-t,r=n-this.bb.readInt32(n);if(!(0!=this.bb.readInt16(r+e)))throw new Error("FlatBuffers: field "+e+" must be set")},W.Builder.prototype.startVector=function(t,e,n){this.notNested(),this.vector_num_elems=e,this.prep(W.SIZEOF_INT,t*e),this.prep(n,t*e)},W.Builder.prototype.endVector=function(){return this.writeInt32(this.vector_num_elems),this.offset()},W.Builder.prototype.createString=function(t){if(t instanceof Uint8Array)var e=t;else{e=[];for(var n=0;n=56320)r=i;else 
r=(i<<10)+t.charCodeAt(n++)+-56613888;r<128?e.push(r):(r<2048?e.push(r>>6&31|192):(r<65536?e.push(r>>12&15|224):e.push(r>>18&7|240,r>>12&63|128),e.push(r>>6&63|128)),e.push(63&r|128))}}this.addInt8(0),this.startVector(1,e.length,1),this.bb.setPosition(this.space-=e.length);n=0;for(var a=this.space,o=this.bb.bytes();n>24},W.ByteBuffer.prototype.readUint8=function(t){return this.bytes_[t]},W.ByteBuffer.prototype.readInt16=function(t){return this.readUint16(t)<<16>>16},W.ByteBuffer.prototype.readUint16=function(t){return this.bytes_[t]|this.bytes_[t+1]<<8},W.ByteBuffer.prototype.readInt32=function(t){return this.bytes_[t]|this.bytes_[t+1]<<8|this.bytes_[t+2]<<16|this.bytes_[t+3]<<24},W.ByteBuffer.prototype.readUint32=function(t){return this.readInt32(t)>>>0},W.ByteBuffer.prototype.readInt64=function(t){return new W.Long(this.readInt32(t),this.readInt32(t+4))},W.ByteBuffer.prototype.readUint64=function(t){return new W.Long(this.readUint32(t),this.readUint32(t+4))},W.ByteBuffer.prototype.readFloat32=function(t){return W.int32[0]=this.readInt32(t),W.float32[0]},W.ByteBuffer.prototype.readFloat64=function(t){return W.int32[W.isLittleEndian?0:1]=this.readInt32(t),W.int32[W.isLittleEndian?1:0]=this.readInt32(t+4),W.float64[0]},W.ByteBuffer.prototype.writeInt8=function(t,e){this.bytes_[t]=e},W.ByteBuffer.prototype.writeUint8=function(t,e){this.bytes_[t]=e},W.ByteBuffer.prototype.writeInt16=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8},W.ByteBuffer.prototype.writeUint16=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8},W.ByteBuffer.prototype.writeInt32=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8,this.bytes_[t+2]=e>>16,this.bytes_[t+3]=e>>24},W.ByteBuffer.prototype.writeUint32=function(t,e){this.bytes_[t]=e,this.bytes_[t+1]=e>>8,this.bytes_[t+2]=e>>16,this.bytes_[t+3]=e>>24},W.ByteBuffer.prototype.writeInt64=function(t,e){this.writeInt32(t,e.low),this.writeInt32(t+4,e.high)},W.ByteBuffer.prototype.writeUint64=function(t,e){this.writeUint32(t,e.low),this.writeUint32(t+4,e.high)},W.ByteBuffer.prototype.writeFloat32=function(t,e){W.float32[0]=e,this.writeInt32(t,W.int32[0])},W.ByteBuffer.prototype.writeFloat64=function(t,e){W.float64[0]=e,this.writeInt32(t,W.int32[W.isLittleEndian?0:1]),this.writeInt32(t+4,W.int32[W.isLittleEndian?1:0])},W.ByteBuffer.prototype.getBufferIdentifier=function(){if(this.bytes_.length>10),56320+(1023&a)))}return r},W.ByteBuffer.prototype.__indirect=function(t){return t+this.readInt32(t)},W.ByteBuffer.prototype.__vector=function(t){return t+this.readInt32(t)+W.SIZEOF_INT},W.ByteBuffer.prototype.__vector_len=function(t){return this.readInt32(t+this.readInt32(t))},W.ByteBuffer.prototype.__has_identifier=function(t){if(t.length!=W.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: file identifier must be length "+W.FILE_IDENTIFIER_LENGTH);for(var e=0;e>6*n)+r];n>0;){var a=e>>6*(n-1);i.push(128|63&a),n-=1}return i}}Z.prototype={decode:function(t,e){var n;n="object"===typeof t&&t instanceof ArrayBuffer?new Uint8Array(t):"object"===typeof t&&"buffer"in t&&t.buffer instanceof ArrayBuffer?new Uint8Array(t.buffer,t.byteOffset,t.byteLength):new Uint8Array(0),e=$(e),this._streaming||(this._decoder=new X({fatal:this._fatal}),this._BOMseen=!1),this._streaming=Boolean(e.stream);for(var r,i=new 
K(n),a=[];!i.endOfStream()&&(r=this._decoder.handler(i,i.read()))!==G;)null!==r&&(Array.isArray(r)?a.push.apply(a,r):a.push(r));if(!this._streaming){do{if((r=this._decoder.handler(i,i.read()))===G)break;null!==r&&(Array.isArray(r)?a.push.apply(a,r):a.push(r))}while(!i.endOfStream());this._decoder=null}return a.length&&(-1===["utf-8"].indexOf(this.encoding)||this._ignoreBOM||this._BOMseen||(65279===a[0]?(this._BOMseen=!0,a.shift()):this._BOMseen=!0)),function(t){for(var e="",n=0;n>10),56320+(1023&r)))}return e}(a)}},Q.prototype={encode:function(t,e){t=t?String(t):"",e=$(e),this._streaming||(this._encoder=new tt(this._options)),this._streaming=Boolean(e.stream);for(var n,r=[],i=new K(function(t){for(var e=String(t),n=e.length,r=0,i=[];r57343)i.push(a);else if(56320<=a&&a<=57343)i.push(65533);else if(55296<=a&&a<=56319)if(r===n-1)i.push(65533);else{var o=t.charCodeAt(r+1);if(56320<=o&&o<=57343){var u=1023&a,s=1023&o;i.push(65536+(u<<10)+s),r+=1}else i.push(65533)}r+=1}return i}(t));!i.endOfStream()&&(n=this._encoder.handler(i,i.read()))!==G;)Array.isArray(n)?r.push.apply(r,n):r.push(n);if(!this._streaming){for(;(n=this._encoder.handler(i,i.read()))!==G;)Array.isArray(n)?r.push.apply(r,n):r.push(n);this._encoder=null}return new Uint8Array(r)}};var et="function"===typeof Buffer?Buffer:null,nt="function"===typeof TextDecoder&&"function"===typeof TextEncoder,rt=function(t){if(nt||!et){var e=new t("utf-8");return function(t){return e.decode(t)}}return function(t){var e=Jt(t),n=e.buffer,r=e.byteOffset,i=e.length;return et.from(n,r,i).toString()}}("undefined"!==typeof TextDecoder?TextDecoder:Z),it=function(t){if(nt||!et){var e=new t;return function(t){return e.encode(t)}}return function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:"";return Jt(et.from(t,"utf8"))}}("undefined"!==typeof TextEncoder?TextEncoder:Q);function at(t,e){return at=Object.setPrototypeOf||function(t,e){return t.__proto__=e,t},at(t,e)}function ot(t,e){if("function"!==typeof e&&null!==e)throw new TypeError("Super expression must either be null or a function");Object.defineProperty(t,"prototype",{value:Object.create(e&&e.prototype,{constructor:{value:t,writable:!0,configurable:!0}}),writable:!1}),e&&at(t,e)}function ut(t){return ut=Object.setPrototypeOf?Object.getPrototypeOf:function(t){return t.__proto__||Object.getPrototypeOf(t)},ut(t)}function st(){if("undefined"===typeof Reflect||!Reflect.construct)return!1;if(Reflect.construct.sham)return!1;if("function"===typeof Proxy)return!0;try{return Boolean.prototype.valueOf.call(Reflect.construct(Boolean,[],(function(){}))),!0}catch(t){return!1}}var ct=n(4),ft=n.n(ct);function lt(t){if(void 0===t)throw new ReferenceError("this hasn't been initialised - super() hasn't been called");return t}function ht(t,e){if(e&&("object"===ft()(e)||"function"===typeof e))return e;if(void 0!==e)throw new TypeError("Derived constructors may only return object or undefined");return lt(t)}function yt(t){var e=st();return function(){var n,r=ut(t);if(e){var i=ut(this).constructor;n=Reflect.construct(r,arguments,i)}else n=r.apply(this,arguments);return ht(this,n)}}var pt=Object.freeze({done:!0,value:void 0}),dt=function(){function t(e){F(this,t),this._json=e}return E(t,[{key:"schema",get:function(){return this._json.schema}},{key:"batches",get:function(){return this._json.batches||[]}},{key:"dictionaries",get:function(){return this._json.dictionaries||[]}}]),t}(),vt=function(){function t(){F(this,t)}return E(t,[{key:"tee",value:function(){return 
this._getDOMStream().tee()}},{key:"pipe",value:function(t,e){return this._getNodeStream().pipe(t,e)}},{key:"pipeTo",value:function(t,e){return this._getDOMStream().pipeTo(t,e)}},{key:"pipeThrough",value:function(t,e){return this._getDOMStream().pipeThrough(t,e)}},{key:"_getDOMStream",value:function(){return this._DOMStream||(this._DOMStream=this.toDOMStream())}},{key:"_getNodeStream",value:function(){return this._nodeStream||(this._nodeStream=this.toNodeStream())}}]),t}(),bt=function(t,e){ot(r,t);var n=yt(r);function r(){var t;return F(this,r),(t=n.call(this))._values=[],t.resolvers=[],t._closedPromise=new Promise((function(e){return t._closedPromiseResolve=e})),t}return E(r,[{key:"closed",get:function(){return this._closedPromise}},{key:"cancel",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.return(e);case 2:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"write",value:function(t){this._ensureOpen()&&(this.resolvers.length<=0?this._values.push(t):this.resolvers.shift().resolve({done:!1,value:t}))}},{key:"abort",value:function(t){this._closedPromiseResolve&&(this.resolvers.length<=0?this._error={error:t}:this.resolvers.shift().reject({done:!0,value:t}))}},{key:"close",value:function(){if(this._closedPromiseResolve){for(var t=this.resolvers;t.length>0;)t.shift().resolve(pt);this._closedPromiseResolve(),this._closedPromiseResolve=void 0}}},{key:e,value:function(){return this}},{key:"toDOMStream",value:function(t){return Be.toDOMStream(this._closedPromiseResolve||this._error?this:this._values,t)}},{key:"toNodeStream",value:function(t){return Be.toNodeStream(this._closedPromiseResolve||this._error?this:this._values,t)}},{key:"throw",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.abort(e);case 2:return t.abrupt("return",pt);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"return",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.close();case 2:return t.abrupt("return",pt);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"read",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"read");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"peek",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"peek");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"next",value:function(){var t=this;return this._values.length>0?Promise.resolve({done:!1,value:this._values.shift()}):this._error?Promise.reject({done:!0,value:this._error.error}):this._closedPromiseResolve?new Promise((function(e,n){t.resolvers.push({resolve:e,reject:n})})):Promise.resolve(pt)}},{key:"_ensureOpen",value:function(){if(this._closedPromiseResolve)return!0;throw new Error("".concat(this," is closed"))}}]),r}(vt,Symbol.asyncIterator),gt=U(function(){var t=function(){throw new Error("BigInt is not available in this environment")};function e(){throw 
t()}return e.asIntN=function(){throw t()},e.asUintN=function(){throw t()},"undefined"!==typeof BigInt?[BigInt,!0]:[e,!1]}(),2),mt=gt[0],kt=gt[1],wt=U(function(){var t=function(){throw new Error("BigInt64Array is not available in this environment")};return"undefined"!==typeof BigInt64Array?[BigInt64Array,!0]:[function(){function e(){throw F(this,e),t()}return E(e,null,[{key:"BYTES_PER_ELEMENT",get:function(){return 8}},{key:"of",value:function(){throw t()}},{key:"from",value:function(){throw t()}}]),e}(),!1]}(),2),_t=wt[0],It=(wt[1],U(function(){var t=function(){throw new Error("BigUint64Array is not available in this environment")};return"undefined"!==typeof BigUint64Array?[BigUint64Array,!0]:[function(){function e(){throw F(this,e),t()}return E(e,null,[{key:"BYTES_PER_ELEMENT",get:function(){return 8}},{key:"of",value:function(){throw t()}},{key:"from",value:function(){throw t()}}]),e}(),!1]}(),2)),St=It[0],xt=(It[1],function(t){return"number"===typeof t}),At=function(t){return"boolean"===typeof t},Tt=function(t){return"function"===typeof t},Bt=function(t){return null!=t&&Object(t)===t},Ot=function(t){return Bt(t)&&Tt(t.then)},Dt=function(t){return Bt(t)&&Tt(t[Symbol.iterator])},Lt=function(t){return Bt(t)&&Tt(t[Symbol.asyncIterator])},Ft=function(t){return Bt(t)&&Bt(t.schema)},Mt=function(t){return Bt(t)&&"done"in t&&"value"in t},Et=function(t){return Bt(t)&&Tt(t.stat)&&xt(t.fd)},Ut=function(t){return Bt(t)&&Ct(t.body)},Nt=function(t){return Bt(t)&&Tt(t.abort)&&Tt(t.getWriter)&&!(t instanceof vt)},Ct=function(t){return Bt(t)&&Tt(t.cancel)&&Tt(t.getReader)&&!(t instanceof vt)},Vt=function(t){return Bt(t)&&Tt(t.end)&&Tt(t.write)&&At(t.writable)&&!(t instanceof vt)},jt=function(t){return Bt(t)&&Tt(t.read)&&Tt(t.pipe)&&At(t.readable)&&!(t instanceof vt)},Rt=R.mark(ie),Pt=W.ByteBuffer,zt="undefined"!==typeof SharedArrayBuffer?SharedArrayBuffer:ArrayBuffer;function Yt(t,e){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:0,r=arguments.length>3&&void 0!==arguments[3]?arguments[3]:e.byteLength,i=t.byteLength,a=new Uint8Array(t.buffer,t.byteOffset,i),o=new Uint8Array(e.buffer,e.byteOffset,Math.min(r,i));return a.set(o,n),t}function Wt(t,e){for(var n,r,i,a=function(t){for(var e,n,r,i,a,o,u=t[0]?[t[0]]:[],s=0,c=0,f=t.length;++s0)do{if(t[n]!==e[n])return!1}while(++n0&&(r.push(i),u+=i.byteLength),!(e||o<=u)){y.next=22;break}case 16:return y.next=18,s();case 18:h=y.sent,a=h.cmd,o=h.size;case 21:if(o0&&(i.push(a),s+=a.byteLength),!(n||u<=s)){t.next=31;break}case 25:return t.next=27,c();case 27:y=t.sent,o=y.cmd,u=y.size;case 30:if(u0&&(i.push(Jt(a)),s+=a.byteLength),!(n||u<=s)){t.next=31;break}case 25:return t.next=27,c();case 27:y=t.sent,o=y.cmd,u=y.size;case 30:if(u=i)){t.next=2;break}return t.abrupt("return",{done:!1,value:new Uint8Array(n,0,i)});case 2:return t.next=4,e.read(new Uint8Array(n,r,i-r));case 4:if(a=t.sent,o=a.done,u=a.value,!((r+=u.byteLength)0&&(c.push(f),s+=f.byteLength)),!(i||u<=s)){t.next=36;break}case 30:return t.next=32,l();case 32:d=t.sent,o=d.cmd,u=d.size;case 35:if(u=0;n--)t.addInt32(e[n]);return t.endVector()}},{key:"startTypeIdsVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endUnion",value:function(t){return t.endObject()}},{key:"createUnion",value:function(t,n,r){return e.startUnion(t),e.addMode(t,n),e.addTypeIds(t,r),e.endUnion(t)}}]),e}();e.Union=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function 
t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"bitWidth",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):0}},{key:"isSigned",value:function(){var t=this.bb.__offset(this.bb_pos,6);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}}],[{key:"getRootAsInt",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startInt",value:function(t){t.startObject(2)}},{key:"addBitWidth",value:function(t,e){t.addFieldInt32(0,e,0)}},{key:"addIsSigned",value:function(t,e){t.addFieldInt8(1,+e,0)}},{key:"endInt",value:function(t){return t.endObject()}},{key:"createInt",value:function(e,n,r){return t.startInt(e),t.addBitWidth(e,n),t.addIsSigned(e,r),t.endInt(e)}}]),t}();t.Int=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"precision",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.Precision.HALF}}],[{key:"getRootAsFloatingPoint",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startFloatingPoint",value:function(t){t.startObject(1)}},{key:"addPrecision",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.Precision.HALF)}},{key:"endFloatingPoint",value:function(t){return t.endObject()}},{key:"createFloatingPoint",value:function(t,n){return e.startFloatingPoint(t),e.addPrecision(t,n),e.endFloatingPoint(t)}}]),e}();e.FloatingPoint=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsUtf8",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startUtf8",value:function(t){t.startObject(0)}},{key:"endUtf8",value:function(t){return t.endObject()}},{key:"createUtf8",value:function(e){return t.startUtf8(e),t.endUtf8(e)}}]),t}();t.Utf8=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsBinary",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startBinary",value:function(t){t.startObject(0)}},{key:"endBinary",value:function(t){return t.endObject()}},{key:"createBinary",value:function(e){return t.startBinary(e),t.endBinary(e)}}]),t}();t.Binary=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsLargeUtf8",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startLargeUtf8",value:function(t){t.startObject(0)}},{key:"endLargeUtf8",value:function(t){return 
t.endObject()}},{key:"createLargeUtf8",value:function(e){return t.startLargeUtf8(e),t.endLargeUtf8(e)}}]),t}();t.LargeUtf8=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsLargeBinary",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startLargeBinary",value:function(t){t.startObject(0)}},{key:"endLargeBinary",value:function(t){return t.endObject()}},{key:"createLargeBinary",value:function(e){return t.startLargeBinary(e),t.endLargeBinary(e)}}]),t}();t.LargeBinary=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"byteWidth",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):0}}],[{key:"getRootAsFixedSizeBinary",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startFixedSizeBinary",value:function(t){t.startObject(1)}},{key:"addByteWidth",value:function(t,e){t.addFieldInt32(0,e,0)}},{key:"endFixedSizeBinary",value:function(t){return t.endObject()}},{key:"createFixedSizeBinary",value:function(e,n){return t.startFixedSizeBinary(e),t.addByteWidth(e,n),t.endFixedSizeBinary(e)}}]),t}();t.FixedSizeBinary=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}}],[{key:"getRootAsBool",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startBool",value:function(t){t.startObject(0)}},{key:"endBool",value:function(t){return t.endObject()}},{key:"createBool",value:function(e){return t.startBool(e),t.endBool(e)}}]),t}();t.Bool=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"precision",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):0}},{key:"scale",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt32(this.bb_pos+t):0}}],[{key:"getRootAsDecimal",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startDecimal",value:function(t){t.startObject(2)}},{key:"addPrecision",value:function(t,e){t.addFieldInt32(0,e,0)}},{key:"addScale",value:function(t,e){t.addFieldInt32(1,e,0)}},{key:"endDecimal",value:function(t){return t.endObject()}},{key:"createDecimal",value:function(e,n,r){return t.startDecimal(e),t.addPrecision(e,n),t.addScale(e,r),t.endDecimal(e)}}]),t}();t.Decimal=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return 
this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.DateUnit.MILLISECOND}}],[{key:"getRootAsDate",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDate",value:function(t){t.startObject(1)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.DateUnit.MILLISECOND)}},{key:"endDate",value:function(t){return t.endObject()}},{key:"createDate",value:function(t,n){return e.startDate(t),e.addUnit(t,n),e.endDate(t)}}]),e}();e.Date=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.TimeUnit.MILLISECOND}},{key:"bitWidth",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt32(this.bb_pos+t):32}}],[{key:"getRootAsTime",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startTime",value:function(t){t.startObject(2)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.TimeUnit.MILLISECOND)}},{key:"addBitWidth",value:function(t,e){t.addFieldInt32(1,e,32)}},{key:"endTime",value:function(t){return t.endObject()}},{key:"createTime",value:function(t,n,r){return e.startTime(t),e.addUnit(t,n),e.addBitWidth(t,r),e.endTime(t)}}]),e}();e.Time=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.TimeUnit.SECOND}},{key:"timezone",value:function(t){var e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}}],[{key:"getRootAsTimestamp",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startTimestamp",value:function(t){t.startObject(2)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.TimeUnit.SECOND)}},{key:"addTimezone",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"endTimestamp",value:function(t){return t.endObject()}},{key:"createTimestamp",value:function(t,n,r){return e.startTimestamp(t),e.addUnit(t,n),e.addTimezone(t,r),e.endTimestamp(t)}}]),e}();e.Timestamp=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.IntervalUnit.YEAR_MONTH}}],[{key:"getRootAsInterval",value:function(t,n){return(n||new 
e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startInterval",value:function(t){t.startObject(1)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.IntervalUnit.YEAR_MONTH)}},{key:"endInterval",value:function(t){return t.endObject()}},{key:"createInterval",value:function(t,n){return e.startInterval(t),e.addUnit(t,n),e.endInterval(t)}}]),e}();e.Interval=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"unit",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.TimeUnit.MILLISECOND}}],[{key:"getRootAsDuration",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDuration",value:function(t){t.startObject(1)}},{key:"addUnit",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.TimeUnit.MILLISECOND)}},{key:"endDuration",value:function(t){return t.endObject()}},{key:"createDuration",value:function(t,n){return e.startDuration(t),e.addUnit(t,n),e.endDuration(t)}}]),e}();e.Duration=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"key",value:function(t){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}},{key:"value",value:function(t){var e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}}],[{key:"getRootAsKeyValue",value:function(e,n){return(n||new t).__init(e.readInt32(e.position())+e.position(),e)}},{key:"startKeyValue",value:function(t){t.startObject(2)}},{key:"addKey",value:function(t,e){t.addFieldOffset(0,e,0)}},{key:"addValue",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"endKeyValue",value:function(t){return t.endObject()}},{key:"createKeyValue",value:function(e,n,r){return t.startKeyValue(e),t.addKey(e,n),t.addValue(e,r),t.endKeyValue(e)}}]),t}();t.KeyValue=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"id",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"indexType",value:function(e){var n=this.bb.__offset(this.bb_pos,6);return n?(e||new t.apache.arrow.flatbuf.Int).__init(this.bb.__indirect(this.bb_pos+n),this.bb):null}},{key:"isOrdered",value:function(){var t=this.bb.__offset(this.bb_pos,8);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}}],[{key:"getRootAsDictionaryEncoding",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDictionaryEncoding",value:function(t){t.startObject(3)}},{key:"addId",value:function(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}},{key:"addIndexType",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"addIsOrdered",value:function(t,e){t.addFieldInt8(2,+e,0)}},{key:"endDictionaryEncoding",value:function(t){return 
t.endObject()}},{key:"createDictionaryEncoding",value:function(t,n,r,i){return e.startDictionaryEncoding(t),e.addId(t,n),e.addIndexType(t,r),e.addIsOrdered(t,i),e.endDictionaryEncoding(t)}}]),e}();e.DictionaryEncoding=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"name",value:function(t){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}},{key:"nullable",value:function(){var t=this.bb.__offset(this.bb_pos,6);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}},{key:"typeType",value:function(){var e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readUint8(this.bb_pos+e):t.apache.arrow.flatbuf.Type.NONE}},{key:"type",value:function(t){var e=this.bb.__offset(this.bb_pos,10);return e?this.bb.__union(t,this.bb_pos+e):null}},{key:"dictionary",value:function(e){var n=this.bb.__offset(this.bb_pos,12);return n?(e||new t.apache.arrow.flatbuf.DictionaryEncoding).__init(this.bb.__indirect(this.bb_pos+n),this.bb):null}},{key:"children",value:function(e,n){var r=this.bb.__offset(this.bb_pos,14);return r?(n||new t.apache.arrow.flatbuf.Field).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"childrenLength",value:function(){var t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"customMetadata",value:function(e,n){var r=this.bb.__offset(this.bb_pos,16);return r?(n||new t.apache.arrow.flatbuf.KeyValue).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"customMetadataLength",value:function(){var t=this.bb.__offset(this.bb_pos,16);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsField",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startField",value:function(t){t.startObject(7)}},{key:"addName",value:function(t,e){t.addFieldOffset(0,e,0)}},{key:"addNullable",value:function(t,e){t.addFieldInt8(1,+e,0)}},{key:"addTypeType",value:function(e,n){e.addFieldInt8(2,n,t.apache.arrow.flatbuf.Type.NONE)}},{key:"addType",value:function(t,e){t.addFieldOffset(3,e,0)}},{key:"addDictionary",value:function(t,e){t.addFieldOffset(4,e,0)}},{key:"addChildren",value:function(t,e){t.addFieldOffset(5,e,0)}},{key:"createChildrenVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startChildrenVector",value:function(t,e){t.startVector(4,e,4)}},{key:"addCustomMetadata",value:function(t,e){t.addFieldOffset(6,e,0)}},{key:"createCustomMetadataVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startCustomMetadataVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endField",value:function(t){return t.endObject()}},{key:"createField",value:function(t,n,r,i,a,o,u,s){return e.startField(t),e.addName(t,n),e.addNullable(t,r),e.addTypeType(t,i),e.addType(t,a),e.addDictionary(t,o),e.addChildren(t,u),e.addCustomMetadata(t,s),e.endField(t)}}]),e}();e.Field=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return 
this.bb_pos=t,this.bb=e,this}},{key:"offset",value:function(){return this.bb.readInt64(this.bb_pos)}},{key:"length",value:function(){return this.bb.readInt64(this.bb_pos+8)}}],[{key:"createBuffer",value:function(t,e,n){return t.prep(8,16),t.writeInt64(n),t.writeInt64(e),t.offset()}}]),t}();t.Buffer=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"endianness",value:function(){var e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readInt16(this.bb_pos+e):t.apache.arrow.flatbuf.Endianness.Little}},{key:"fields",value:function(e,n){var r=this.bb.__offset(this.bb_pos,6);return r?(n||new t.apache.arrow.flatbuf.Field).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"fieldsLength",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"customMetadata",value:function(e,n){var r=this.bb.__offset(this.bb_pos,8);return r?(n||new t.apache.arrow.flatbuf.KeyValue).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*e),this.bb):null}},{key:"customMetadataLength",value:function(){var t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsSchema",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startSchema",value:function(t){t.startObject(3)}},{key:"addEndianness",value:function(e,n){e.addFieldInt16(0,n,t.apache.arrow.flatbuf.Endianness.Little)}},{key:"addFields",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"createFieldsVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startFieldsVector",value:function(t,e){t.startVector(4,e,4)}},{key:"addCustomMetadata",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"createCustomMetadataVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startCustomMetadataVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endSchema",value:function(t){return t.endObject()}},{key:"finishSchemaBuffer",value:function(t,e){t.finish(e)}},{key:"createSchema",value:function(t,n,r,i){return e.startSchema(t),e.addEndianness(t,n),e.addFields(t,r),e.addCustomMetadata(t,i),e.endSchema(t)}}]),e}();e.Schema=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ye||(Ye={})),function(t){!function(t){!function(t){!function(t){t.Schema=Ye.apache.arrow.flatbuf.Schema}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(t){!function(t){!function(t){!function(t){t[t.NONE=0]="NONE",t[t.Schema=1]="Schema",t[t.DictionaryBatch=2]="DictionaryBatch",t[t.RecordBatch=3]="RecordBatch",t[t.Tensor=4]="Tensor",t[t.SparseTensor=5]="SparseTensor"}(t.MessageHeader||(t.MessageHeader={}))}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"length",value:function(){return this.bb.readInt64(this.bb_pos)}},{key:"nullCount",value:function(){return 
this.bb.readInt64(this.bb_pos+8)}}],[{key:"createFieldNode",value:function(t,e,n){return t.prep(8,16),t.writeInt64(n),t.writeInt64(e),t.offset()}}]),t}();t.FieldNode=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"length",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"nodes",value:function(e,n){var r=this.bb.__offset(this.bb_pos,6);return r?(n||new t.apache.arrow.flatbuf.FieldNode).__init(this.bb.__vector(this.bb_pos+r)+16*e,this.bb):null}},{key:"nodesLength",value:function(){var t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"buffers",value:function(t,e){var n=this.bb.__offset(this.bb_pos,8);return n?(e||new Ye.apache.arrow.flatbuf.Buffer).__init(this.bb.__vector(this.bb_pos+n)+16*t,this.bb):null}},{key:"buffersLength",value:function(){var t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsRecordBatch",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startRecordBatch",value:function(t){t.startObject(3)}},{key:"addLength",value:function(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}},{key:"addNodes",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"startNodesVector",value:function(t,e){t.startVector(16,e,8)}},{key:"addBuffers",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"startBuffersVector",value:function(t,e){t.startVector(16,e,8)}},{key:"endRecordBatch",value:function(t){return t.endObject()}},{key:"createRecordBatch",value:function(t,n,r,i){return e.startRecordBatch(t),e.addLength(t,n),e.addNodes(t,r),e.addBuffers(t,i),e.endRecordBatch(t)}}]),e}();e.RecordBatch=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"id",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"data",value:function(e){var n=this.bb.__offset(this.bb_pos,6);return n?(e||new t.apache.arrow.flatbuf.RecordBatch).__init(this.bb.__indirect(this.bb_pos+n),this.bb):null}},{key:"isDelta",value:function(){var t=this.bb.__offset(this.bb_pos,8);return!!t&&!!this.bb.readInt8(this.bb_pos+t)}}],[{key:"getRootAsDictionaryBatch",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startDictionaryBatch",value:function(t){t.startObject(3)}},{key:"addId",value:function(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}},{key:"addData",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"addIsDelta",value:function(t,e){t.addFieldInt8(2,+e,0)}},{key:"endDictionaryBatch",value:function(t){return t.endObject()}},{key:"createDictionaryBatch",value:function(t,n,r,i){return e.startDictionaryBatch(t),e.addId(t,n),e.addData(t,r),e.addIsDelta(t,i),e.endDictionaryBatch(t)}}]),e}();e.DictionaryBatch=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={})),function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return 
E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"version",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt16(this.bb_pos+t):Ye.apache.arrow.flatbuf.MetadataVersion.V1}},{key:"headerType",value:function(){var e=this.bb.__offset(this.bb_pos,6);return e?this.bb.readUint8(this.bb_pos+e):t.apache.arrow.flatbuf.MessageHeader.NONE}},{key:"header",value:function(t){var e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__union(t,this.bb_pos+e):null}},{key:"bodyLength",value:function(){var t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}},{key:"customMetadata",value:function(t,e){var n=this.bb.__offset(this.bb_pos,12);return n?(e||new Ye.apache.arrow.flatbuf.KeyValue).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+n)+4*t),this.bb):null}},{key:"customMetadataLength",value:function(){var t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsMessage",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startMessage",value:function(t){t.startObject(5)}},{key:"addVersion",value:function(t,e){t.addFieldInt16(0,e,Ye.apache.arrow.flatbuf.MetadataVersion.V1)}},{key:"addHeaderType",value:function(e,n){e.addFieldInt8(1,n,t.apache.arrow.flatbuf.MessageHeader.NONE)}},{key:"addHeader",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"addBodyLength",value:function(t,e){t.addFieldInt64(3,e,t.createLong(0,0))}},{key:"addCustomMetadata",value:function(t,e){t.addFieldOffset(4,e,0)}},{key:"createCustomMetadataVector",value:function(t,e){t.startVector(4,e.length,4);for(var n=e.length-1;n>=0;n--)t.addOffset(e[n]);return t.endVector()}},{key:"startCustomMetadataVector",value:function(t,e){t.startVector(4,e,4)}},{key:"endMessage",value:function(t){return t.endObject()}},{key:"finishMessageBuffer",value:function(t,e){t.finish(e)}},{key:"createMessage",value:function(t,n,r,i,a,o){return e.startMessage(t),e.addVersion(t,n),e.addHeaderType(t,r),e.addHeader(t,i),e.addBodyLength(t,a),e.addCustomMetadata(t,o),e.endMessage(t)}}]),e}();e.Message=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Ge||(Ge={}));Ye.apache.arrow.flatbuf.Type;var 
Je,Ze,Qe=Ye.apache.arrow.flatbuf.DateUnit,Xe=Ye.apache.arrow.flatbuf.TimeUnit,tn=Ye.apache.arrow.flatbuf.Precision,en=Ye.apache.arrow.flatbuf.UnionMode,nn=Ye.apache.arrow.flatbuf.IntervalUnit,rn=Ge.apache.arrow.flatbuf.MessageHeader,an=Ye.apache.arrow.flatbuf.MetadataVersion;!function(t){t[t.NONE=0]="NONE",t[t.Null=1]="Null",t[t.Int=2]="Int",t[t.Float=3]="Float",t[t.Binary=4]="Binary",t[t.Utf8=5]="Utf8",t[t.Bool=6]="Bool",t[t.Decimal=7]="Decimal",t[t.Date=8]="Date",t[t.Time=9]="Time",t[t.Timestamp=10]="Timestamp",t[t.Interval=11]="Interval",t[t.List=12]="List",t[t.Struct=13]="Struct",t[t.Union=14]="Union",t[t.FixedSizeBinary=15]="FixedSizeBinary",t[t.FixedSizeList=16]="FixedSizeList",t[t.Map=17]="Map",t[t.Dictionary=-1]="Dictionary",t[t.Int8=-2]="Int8",t[t.Int16=-3]="Int16",t[t.Int32=-4]="Int32",t[t.Int64=-5]="Int64",t[t.Uint8=-6]="Uint8",t[t.Uint16=-7]="Uint16",t[t.Uint32=-8]="Uint32",t[t.Uint64=-9]="Uint64",t[t.Float16=-10]="Float16",t[t.Float32=-11]="Float32",t[t.Float64=-12]="Float64",t[t.DateDay=-13]="DateDay",t[t.DateMillisecond=-14]="DateMillisecond",t[t.TimestampSecond=-15]="TimestampSecond",t[t.TimestampMillisecond=-16]="TimestampMillisecond",t[t.TimestampMicrosecond=-17]="TimestampMicrosecond",t[t.TimestampNanosecond=-18]="TimestampNanosecond",t[t.TimeSecond=-19]="TimeSecond",t[t.TimeMillisecond=-20]="TimeMillisecond",t[t.TimeMicrosecond=-21]="TimeMicrosecond",t[t.TimeNanosecond=-22]="TimeNanosecond",t[t.DenseUnion=-23]="DenseUnion",t[t.SparseUnion=-24]="SparseUnion",t[t.IntervalDayTime=-25]="IntervalDayTime",t[t.IntervalYearMonth=-26]="IntervalYearMonth"}(Je||(Je={})),function(t){t[t.OFFSET=0]="OFFSET",t[t.DATA=1]="DATA",t[t.VALIDITY=2]="VALIDITY",t[t.TYPE=3]="TYPE"}(Ze||(Ze={}));var on=R.mark(hn);function un(t,e,n,r){return 0!==(n&1<>r}function cn(t,e,n){return n?!!(t[e>>3]|=1<>3]&=~(1<0||n.byteLength>3):ln(hn(n,t,e,null,un)).subarray(0,r)),i}return n}function ln(t){var e,n=[],r=0,i=0,a=0,o=O(t);try{for(o.s();!(e=o.n()).done;){e.value&&(a|=1<0)&&(n[r++]=a);var u=new Uint8Array(n.length+7&-8);return u.set(n),u}function hn(t,e,n,r,i){var a,o,u,s,c;return R.wrap((function(f){for(;;)switch(f.prev=f.next){case 0:a=e%8,o=e>>3,u=0,s=n;case 3:if(!(s>0)){f.next=11;break}c=t[o++];case 5:return f.next=7,i(r,u++,c,a);case 7:if(--s>0&&++a<8){f.next=5;break}case 8:a=0,f.next=3;break;case 11:case"end":return f.stop()}}),on)}function yn(t,e,n){if(n-e<=0)return 0;if(n-e<8){var r,i=0,a=O(hn(t,e,n-e,t,sn));try{for(a.s();!(r=a.n()).done;){i+=r.value}}catch(s){a.e(s)}finally{a.f()}return i}var o=n>>3<<3,u=e+(e%8===0?0:8-e%8);return yn(t,e,u)+yn(t,o,n)+pn(t,u>>3,o-u>>3)}function pn(t,e,n){for(var r=0,i=0|e,a=new DataView(t.buffer,t.byteOffset,t.byteLength),o=void 0===n?t.byteLength:i+n;o-i>=4;)r+=dn(a.getUint32(i)),i+=4;for(;o-i>=2;)r+=dn(a.getUint16(i)),i+=2;for(;o-i>=1;)r+=dn(a.getUint8(i)),i+=1;return r}function dn(t){var e=0|t;return 16843009*((e=(858993459&(e-=e>>>1&1431655765))+(e>>>2&858993459))+(e>>>4)&252645135)>>>24}function vn(t){return function(t){if(Array.isArray(t))return T(t)}(t)||function(t){if("undefined"!==typeof Symbol&&null!=t[Symbol.iterator]||null!=t["@@iterator"])return Array.from(t)}(t)||B(t)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}var bn=function(){function t(){F(this,t)}return E(t,[{key:"visitMany",value:function(t){for(var e=this,n=arguments.length,r=new Array(n>1?n-1:0),i=1;i1&&void 0!==arguments[1])||arguments[1];return 
gn(this,t,e)}},{key:"visitNull",value:function(t){return null}},{key:"visitBool",value:function(t){return null}},{key:"visitInt",value:function(t){return null}},{key:"visitFloat",value:function(t){return null}},{key:"visitUtf8",value:function(t){return null}},{key:"visitBinary",value:function(t){return null}},{key:"visitFixedSizeBinary",value:function(t){return null}},{key:"visitDate",value:function(t){return null}},{key:"visitTimestamp",value:function(t){return null}},{key:"visitTime",value:function(t){return null}},{key:"visitDecimal",value:function(t){return null}},{key:"visitList",value:function(t){return null}},{key:"visitStruct",value:function(t){return null}},{key:"visitUnion",value:function(t){return null}},{key:"visitDictionary",value:function(t){return null}},{key:"visitInterval",value:function(t){return null}},{key:"visitFixedSizeList",value:function(t){return null}},{key:"visitMap",value:function(t){return null}}]),t}();function gn(t,e){var n=!(arguments.length>2&&void 0!==arguments[2])||arguments[2],r=null,i=Je.NONE;switch(e instanceof yr||e instanceof qe?i=mn(e.type):e instanceof Fn?i=mn(e):"number"!==typeof(i=e)&&(i=Je[e]),i){case Je.Null:r=t.visitNull;break;case Je.Bool:r=t.visitBool;break;case Je.Int:r=t.visitInt;break;case Je.Int8:r=t.visitInt8||t.visitInt;break;case Je.Int16:r=t.visitInt16||t.visitInt;break;case Je.Int32:r=t.visitInt32||t.visitInt;break;case Je.Int64:r=t.visitInt64||t.visitInt;break;case Je.Uint8:r=t.visitUint8||t.visitInt;break;case Je.Uint16:r=t.visitUint16||t.visitInt;break;case Je.Uint32:r=t.visitUint32||t.visitInt;break;case Je.Uint64:r=t.visitUint64||t.visitInt;break;case Je.Float:r=t.visitFloat;break;case Je.Float16:r=t.visitFloat16||t.visitFloat;break;case Je.Float32:r=t.visitFloat32||t.visitFloat;break;case Je.Float64:r=t.visitFloat64||t.visitFloat;break;case Je.Utf8:r=t.visitUtf8;break;case Je.Binary:r=t.visitBinary;break;case Je.FixedSizeBinary:r=t.visitFixedSizeBinary;break;case Je.Date:r=t.visitDate;break;case Je.DateDay:r=t.visitDateDay||t.visitDate;break;case Je.DateMillisecond:r=t.visitDateMillisecond||t.visitDate;break;case Je.Timestamp:r=t.visitTimestamp;break;case Je.TimestampSecond:r=t.visitTimestampSecond||t.visitTimestamp;break;case Je.TimestampMillisecond:r=t.visitTimestampMillisecond||t.visitTimestamp;break;case Je.TimestampMicrosecond:r=t.visitTimestampMicrosecond||t.visitTimestamp;break;case Je.TimestampNanosecond:r=t.visitTimestampNanosecond||t.visitTimestamp;break;case Je.Time:r=t.visitTime;break;case Je.TimeSecond:r=t.visitTimeSecond||t.visitTime;break;case Je.TimeMillisecond:r=t.visitTimeMillisecond||t.visitTime;break;case Je.TimeMicrosecond:r=t.visitTimeMicrosecond||t.visitTime;break;case Je.TimeNanosecond:r=t.visitTimeNanosecond||t.visitTime;break;case Je.Decimal:r=t.visitDecimal;break;case Je.List:r=t.visitList;break;case Je.Struct:r=t.visitStruct;break;case Je.Union:r=t.visitUnion;break;case Je.DenseUnion:r=t.visitDenseUnion||t.visitUnion;break;case Je.SparseUnion:r=t.visitSparseUnion||t.visitUnion;break;case Je.Dictionary:r=t.visitDictionary;break;case Je.Interval:r=t.visitInterval;break;case Je.IntervalDayTime:r=t.visitIntervalDayTime||t.visitInterval;break;case Je.IntervalYearMonth:r=t.visitIntervalYearMonth||t.visitInterval;break;case Je.FixedSizeList:r=t.visitFixedSizeList;break;case Je.Map:r=t.visitMap}if("function"===typeof r)return r;if(!n)return function(){return null};throw new Error("Unrecognized type '".concat(Je[i],"'"))}function mn(t){switch(t.typeId){case Je.Null:return Je.Null;case Je.Int:var 
e=t.bitWidth,n=t.isSigned;switch(e){case 8:return n?Je.Int8:Je.Uint8;case 16:return n?Je.Int16:Je.Uint16;case 32:return n?Je.Int32:Je.Uint32;case 64:return n?Je.Int64:Je.Uint64}return Je.Int;case Je.Float:switch(t.precision){case tn.HALF:return Je.Float16;case tn.SINGLE:return Je.Float32;case tn.DOUBLE:return Je.Float64}return Je.Float;case Je.Binary:return Je.Binary;case Je.Utf8:return Je.Utf8;case Je.Bool:return Je.Bool;case Je.Decimal:return Je.Decimal;case Je.Time:switch(t.unit){case Xe.SECOND:return Je.TimeSecond;case Xe.MILLISECOND:return Je.TimeMillisecond;case Xe.MICROSECOND:return Je.TimeMicrosecond;case Xe.NANOSECOND:return Je.TimeNanosecond}return Je.Time;case Je.Timestamp:switch(t.unit){case Xe.SECOND:return Je.TimestampSecond;case Xe.MILLISECOND:return Je.TimestampMillisecond;case Xe.MICROSECOND:return Je.TimestampMicrosecond;case Xe.NANOSECOND:return Je.TimestampNanosecond}return Je.Timestamp;case Je.Date:switch(t.unit){case Qe.DAY:return Je.DateDay;case Qe.MILLISECOND:return Je.DateMillisecond}return Je.Date;case Je.Interval:switch(t.unit){case nn.DAY_TIME:return Je.IntervalDayTime;case nn.YEAR_MONTH:return Je.IntervalYearMonth}return Je.Interval;case Je.Map:return Je.Map;case Je.List:return Je.List;case Je.Struct:return Je.Struct;case Je.Union:switch(t.mode){case en.Dense:return Je.DenseUnion;case en.Sparse:return Je.SparseUnion}return Je.Union;case Je.FixedSizeBinary:return Je.FixedSizeBinary;case Je.FixedSizeList:return Je.FixedSizeList;case Je.Dictionary:return Je.Dictionary}throw new Error("Unrecognized type '".concat(Je[t.typeId],"'"))}bn.prototype.visitInt8=null,bn.prototype.visitInt16=null,bn.prototype.visitInt32=null,bn.prototype.visitInt64=null,bn.prototype.visitUint8=null,bn.prototype.visitUint16=null,bn.prototype.visitUint32=null,bn.prototype.visitUint64=null,bn.prototype.visitFloat16=null,bn.prototype.visitFloat32=null,bn.prototype.visitFloat64=null,bn.prototype.visitDateDay=null,bn.prototype.visitDateMillisecond=null,bn.prototype.visitTimestampSecond=null,bn.prototype.visitTimestampMillisecond=null,bn.prototype.visitTimestampMicrosecond=null,bn.prototype.visitTimestampNanosecond=null,bn.prototype.visitTimeSecond=null,bn.prototype.visitTimeMillisecond=null,bn.prototype.visitTimeMicrosecond=null,bn.prototype.visitTimeNanosecond=null,bn.prototype.visitDenseUnion=null,bn.prototype.visitSparseUnion=null,bn.prototype.visitIntervalDayTime=null,bn.prototype.visitIntervalYearMonth=null;var kn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"compareSchemas",value:function(t,e){return t===e||e instanceof t.constructor&&Ln.compareFields(t.fields,e.fields)}},{key:"compareFields",value:function(t,e){return t===e||Array.isArray(t)&&Array.isArray(e)&&t.length===e.length&&t.every((function(t,n){return Ln.compareField(t,e[n])}))}},{key:"compareField",value:function(t,e){return t===e||e instanceof t.constructor&&t.name===e.name&&t.nullable===e.nullable&&Ln.visit(t.type,e.type)}}]),n}(bn);function wn(t,e){return e instanceof t.constructor}function _n(t,e){return t===e||wn(t,e)}function In(t,e){return t===e||wn(t,e)&&t.bitWidth===e.bitWidth&&t.isSigned===e.isSigned}function Sn(t,e){return t===e||wn(t,e)&&t.precision===e.precision}function xn(t,e){return t===e||wn(t,e)&&t.unit===e.unit}function An(t,e){return t===e||wn(t,e)&&t.unit===e.unit&&t.timezone===e.timezone}function Tn(t,e){return t===e||wn(t,e)&&t.unit===e.unit&&t.bitWidth===e.bitWidth}function Bn(t,e){return 
t===e||wn(t,e)&&t.mode===e.mode&&t.typeIds.every((function(t,n){return t===e.typeIds[n]}))&&Ln.compareFields(t.children,e.children)}function On(t,e){return t===e||wn(t,e)&&t.unit===e.unit}kn.prototype.visitNull=_n,kn.prototype.visitBool=_n,kn.prototype.visitInt=In,kn.prototype.visitInt8=In,kn.prototype.visitInt16=In,kn.prototype.visitInt32=In,kn.prototype.visitInt64=In,kn.prototype.visitUint8=In,kn.prototype.visitUint16=In,kn.prototype.visitUint32=In,kn.prototype.visitUint64=In,kn.prototype.visitFloat=Sn,kn.prototype.visitFloat16=Sn,kn.prototype.visitFloat32=Sn,kn.prototype.visitFloat64=Sn,kn.prototype.visitUtf8=_n,kn.prototype.visitBinary=_n,kn.prototype.visitFixedSizeBinary=function(t,e){return t===e||wn(t,e)&&t.byteWidth===e.byteWidth},kn.prototype.visitDate=xn,kn.prototype.visitDateDay=xn,kn.prototype.visitDateMillisecond=xn,kn.prototype.visitTimestamp=An,kn.prototype.visitTimestampSecond=An,kn.prototype.visitTimestampMillisecond=An,kn.prototype.visitTimestampMicrosecond=An,kn.prototype.visitTimestampNanosecond=An,kn.prototype.visitTime=Tn,kn.prototype.visitTimeSecond=Tn,kn.prototype.visitTimeMillisecond=Tn,kn.prototype.visitTimeMicrosecond=Tn,kn.prototype.visitTimeNanosecond=Tn,kn.prototype.visitDecimal=_n,kn.prototype.visitList=function(t,e){return t===e||wn(t,e)&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)},kn.prototype.visitStruct=function(t,e){return t===e||wn(t,e)&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)},kn.prototype.visitUnion=Bn,kn.prototype.visitDenseUnion=Bn,kn.prototype.visitSparseUnion=Bn,kn.prototype.visitDictionary=function(t,e){return t===e||wn(t,e)&&t.id===e.id&&t.isOrdered===e.isOrdered&&Ln.visit(t.indices,e.indices)&&Ln.visit(t.dictionary,e.dictionary)},kn.prototype.visitInterval=On,kn.prototype.visitIntervalDayTime=On,kn.prototype.visitIntervalYearMonth=On,kn.prototype.visitFixedSizeList=function(t,e){return t===e||wn(t,e)&&t.listSize===e.listSize&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)},kn.prototype.visitMap=function(t,e){return t===e||wn(t,e)&&t.keysSorted===e.keysSorted&&t.children.length===e.children.length&&Ln.compareFields(t.children,e.children)};var Dn,Ln=new kn,Fn=function(){function t(){F(this,t)}return E(t,[{key:"typeId",get:function(){return Je.NONE}},{key:"compareTo",value:function(t){return Ln.visit(this,t)}}],[{key:"isNull",value:function(t){return t&&t.typeId===Je.Null}},{key:"isInt",value:function(t){return t&&t.typeId===Je.Int}},{key:"isFloat",value:function(t){return t&&t.typeId===Je.Float}},{key:"isBinary",value:function(t){return t&&t.typeId===Je.Binary}},{key:"isUtf8",value:function(t){return t&&t.typeId===Je.Utf8}},{key:"isBool",value:function(t){return t&&t.typeId===Je.Bool}},{key:"isDecimal",value:function(t){return t&&t.typeId===Je.Decimal}},{key:"isDate",value:function(t){return t&&t.typeId===Je.Date}},{key:"isTime",value:function(t){return t&&t.typeId===Je.Time}},{key:"isTimestamp",value:function(t){return t&&t.typeId===Je.Timestamp}},{key:"isInterval",value:function(t){return t&&t.typeId===Je.Interval}},{key:"isList",value:function(t){return t&&t.typeId===Je.List}},{key:"isStruct",value:function(t){return t&&t.typeId===Je.Struct}},{key:"isUnion",value:function(t){return t&&t.typeId===Je.Union}},{key:"isFixedSizeBinary",value:function(t){return t&&t.typeId===Je.FixedSizeBinary}},{key:"isFixedSizeList",value:function(t){return t&&t.typeId===Je.FixedSizeList}},{key:"isMap",value:function(t){return 
t&&t.typeId===Je.Map}},{key:"isDictionary",value:function(t){return t&&t.typeId===Je.Dictionary}}]),t}();Fn[Symbol.toStringTag]=((Dn=Fn.prototype).children=null,Dn.ArrayType=Array,Dn[Symbol.toStringTag]="DataType");var Mn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"toString",value:function(){return"Null"}},{key:"typeId",get:function(){return Je.Null}}]),n}(Fn);Mn[Symbol.toStringTag]=function(t){return t[Symbol.toStringTag]="Null"}(Mn.prototype);var En=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).isSigned=t,i.bitWidth=r,i}return E(n,[{key:"typeId",get:function(){return Je.Int}},{key:"ArrayType",get:function(){switch(this.bitWidth){case 8:return this.isSigned?Int8Array:Uint8Array;case 16:return this.isSigned?Int16Array:Uint16Array;case 32:case 64:return this.isSigned?Int32Array:Uint32Array}throw new Error("Unrecognized ".concat(this[Symbol.toStringTag]," type"))}},{key:"toString",value:function(){return"".concat(this.isSigned?"I":"Ui","nt").concat(this.bitWidth)}}]),n}(Fn);En[Symbol.toStringTag]=function(t){return t.isSigned=null,t.bitWidth=null,t[Symbol.toStringTag]="Int"}(En.prototype);var Un=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,8)}return E(n)}(En),Nn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,16)}return E(n)}(En),Cn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,32)}return E(n)}(En),Vn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!0,64)}return E(n)}(En),jn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,8)}return E(n)}(En),Rn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,16)}return E(n)}(En),Pn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,32)}return E(n)}(En),zn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,!1,64)}return E(n)}(En);Object.defineProperty(Un.prototype,"ArrayType",{value:Int8Array}),Object.defineProperty(Nn.prototype,"ArrayType",{value:Int16Array}),Object.defineProperty(Cn.prototype,"ArrayType",{value:Int32Array}),Object.defineProperty(Vn.prototype,"ArrayType",{value:Int32Array}),Object.defineProperty(jn.prototype,"ArrayType",{value:Uint8Array}),Object.defineProperty(Rn.prototype,"ArrayType",{value:Uint16Array}),Object.defineProperty(Pn.prototype,"ArrayType",{value:Uint32Array}),Object.defineProperty(zn.prototype,"ArrayType",{value:Uint32Array});var Yn=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).precision=t,r}return E(n,[{key:"typeId",get:function(){return Je.Float}},{key:"ArrayType",get:function(){switch(this.precision){case tn.HALF:return Uint16Array;case tn.SINGLE:return Float32Array;case tn.DOUBLE:return Float64Array}throw new Error("Unrecognized ".concat(this[Symbol.toStringTag]," type"))}},{key:"toString",value:function(){return"Float".concat(this.precision<<5||16)}}]),n}(Fn);Yn[Symbol.toStringTag]=function(t){return t.precision=null,t[Symbol.toStringTag]="Float"}(Yn.prototype);var Wn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,tn.HALF)}return E(n)}(Yn),Hn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,tn.SINGLE)}return E(n)}(Yn),$n=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,tn.DOUBLE)}return 
E(n)}(Yn);Object.defineProperty(Wn.prototype,"ArrayType",{value:Uint16Array}),Object.defineProperty(Hn.prototype,"ArrayType",{value:Float32Array}),Object.defineProperty($n.prototype,"ArrayType",{value:Float64Array});var Kn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this)}return E(n,[{key:"typeId",get:function(){return Je.Binary}},{key:"toString",value:function(){return"Binary"}}]),n}(Fn);Kn[Symbol.toStringTag]=function(t){return t.ArrayType=Uint8Array,t[Symbol.toStringTag]="Binary"}(Kn.prototype);var Gn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this)}return E(n,[{key:"typeId",get:function(){return Je.Utf8}},{key:"toString",value:function(){return"Utf8"}}]),n}(Fn);Gn[Symbol.toStringTag]=function(t){return t.ArrayType=Uint8Array,t[Symbol.toStringTag]="Utf8"}(Gn.prototype);var qn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this)}return E(n,[{key:"typeId",get:function(){return Je.Bool}},{key:"toString",value:function(){return"Bool"}}]),n}(Fn);qn[Symbol.toStringTag]=function(t){return t.ArrayType=Uint8Array,t[Symbol.toStringTag]="Bool"}(qn.prototype);var Jn=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).scale=t,i.precision=r,i}return E(n,[{key:"typeId",get:function(){return Je.Decimal}},{key:"toString",value:function(){return"Decimal[".concat(this.precision,"e").concat(this.scale>0?"+":"").concat(this.scale,"]")}}]),n}(Fn);Jn[Symbol.toStringTag]=function(t){return t.scale=null,t.precision=null,t.ArrayType=Uint32Array,t[Symbol.toStringTag]="Decimal"}(Jn.prototype);var Zn=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).unit=t,r}return E(n,[{key:"typeId",get:function(){return Je.Date}},{key:"toString",value:function(){return"Date".concat(32*(this.unit+1),"<").concat(Qe[this.unit],">")}}]),n}(Fn);Zn[Symbol.toStringTag]=function(t){return t.unit=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Date"}(Zn.prototype);var Qn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,Qe.DAY)}return E(n)}(Zn),Xn=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.call(this,Qe.MILLISECOND)}return E(n)}(Zn),tr=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).unit=t,i.bitWidth=r,i}return E(n,[{key:"typeId",get:function(){return Je.Time}},{key:"toString",value:function(){return"Time".concat(this.bitWidth,"<").concat(Xe[this.unit],">")}}]),n}(Fn);tr[Symbol.toStringTag]=function(t){return t.unit=null,t.bitWidth=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Time"}(tr.prototype);var er=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).unit=t,i.timezone=r,i}return E(n,[{key:"typeId",get:function(){return Je.Timestamp}},{key:"toString",value:function(){return"Timestamp<".concat(Xe[this.unit]).concat(this.timezone?", ".concat(this.timezone):"",">")}}]),n}(Fn);er[Symbol.toStringTag]=function(t){return t.unit=null,t.timezone=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Timestamp"}(er.prototype);var nr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).unit=t,r}return E(n,[{key:"typeId",get:function(){return Je.Interval}},{key:"toString",value:function(){return"Interval<".concat(nn[this.unit],">")}}]),n}(Fn);nr[Symbol.toStringTag]=function(t){return t.unit=null,t.ArrayType=Int32Array,t[Symbol.toStringTag]="Interval"}(nr.prototype);var rr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return 
F(this,n),(r=e.call(this)).children=[t],r}return E(n,[{key:"typeId",get:function(){return Je.List}},{key:"toString",value:function(){return"List<".concat(this.valueType,">")}},{key:"valueType",get:function(){return this.children[0].type}},{key:"valueField",get:function(){return this.children[0]}},{key:"ArrayType",get:function(){return this.valueType.ArrayType}}]),n}(Fn);rr[Symbol.toStringTag]=function(t){return t.children=null,t[Symbol.toStringTag]="List"}(rr.prototype);var ir=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).children=t,r}return E(n,[{key:"typeId",get:function(){return Je.Struct}},{key:"toString",value:function(){return"Struct<{".concat(this.children.map((function(t){return"".concat(t.name,":").concat(t.type)})).join(", "),"}>")}}]),n}(Fn);ir[Symbol.toStringTag]=function(t){return t.children=null,t[Symbol.toStringTag]="Struct"}(ir.prototype);var ar=function(t){ot(n,t);var e=yt(n);function n(t,r,i){var a;return F(this,n),(a=e.call(this)).mode=t,a.children=i,a.typeIds=r=Int32Array.from(r),a.typeIdToChildIndex=r.reduce((function(t,e,n){return(t[e]=n)&&t||t}),Object.create(null)),a}return E(n,[{key:"typeId",get:function(){return Je.Union}},{key:"toString",value:function(){return"".concat(this[Symbol.toStringTag],"<").concat(this.children.map((function(t){return"".concat(t.type)})).join(" | "),">")}}]),n}(Fn);ar[Symbol.toStringTag]=function(t){return t.mode=null,t.typeIds=null,t.children=null,t.typeIdToChildIndex=null,t.ArrayType=Int8Array,t[Symbol.toStringTag]="Union"}(ar.prototype);var or=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).byteWidth=t,r}return E(n,[{key:"typeId",get:function(){return Je.FixedSizeBinary}},{key:"toString",value:function(){return"FixedSizeBinary[".concat(this.byteWidth,"]")}}]),n}(Fn);or[Symbol.toStringTag]=function(t){return t.byteWidth=null,t.ArrayType=Uint8Array,t[Symbol.toStringTag]="FixedSizeBinary"}(or.prototype);var ur=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).listSize=t,i.children=[r],i}return E(n,[{key:"typeId",get:function(){return Je.FixedSizeList}},{key:"valueType",get:function(){return this.children[0].type}},{key:"valueField",get:function(){return this.children[0]}},{key:"ArrayType",get:function(){return this.valueType.ArrayType}},{key:"toString",value:function(){return"FixedSizeList[".concat(this.listSize,"]<").concat(this.valueType,">")}}]),n}(Fn);ur[Symbol.toStringTag]=function(t){return t.children=null,t.listSize=null,t[Symbol.toStringTag]="FixedSizeList"}(ur.prototype);var sr=function(t){ot(n,t);var e=yt(n);function n(t){var r,i=arguments.length>1&&void 0!==arguments[1]&&arguments[1];return F(this,n),(r=e.call(this)).children=[t],r.keysSorted=i,r}return E(n,[{key:"typeId",get:function(){return Je.Map}},{key:"keyType",get:function(){return this.children[0].type.children[0].type}},{key:"valueType",get:function(){return this.children[0].type.children[1].type}},{key:"toString",value:function(){return"Map<{".concat(this.children[0].type.children.map((function(t){return"".concat(t.name,":").concat(t.type)})).join(", "),"}>")}}]),n}(Fn);sr[Symbol.toStringTag]=function(t){return t.children=null,t.keysSorted=null,t[Symbol.toStringTag]="Map_"}(sr.prototype);var cr,fr=(cr=-1,function(){return++cr}),lr=function(t){ot(n,t);var e=yt(n);function n(t,r,i,a){var o;return F(this,n),(o=e.call(this)).indices=r,o.dictionary=t,o.isOrdered=a||!1,o.id=null==i?fr():"number"===typeof i?i:i.low,o}return 
E(n,[{key:"typeId",get:function(){return Je.Dictionary}},{key:"children",get:function(){return this.dictionary.children}},{key:"valueType",get:function(){return this.dictionary}},{key:"ArrayType",get:function(){return this.dictionary.ArrayType}},{key:"toString",value:function(){return"Dictionary<".concat(this.indices,", ").concat(this.dictionary,">")}}]),n}(Fn);function hr(t){var e=t;switch(t.typeId){case Je.Decimal:return 4;case Je.Timestamp:return 2;case Je.Date:case Je.Interval:return 1+e.unit;case Je.Int:case Je.Time:return+(e.bitWidth>32)+1;case Je.FixedSizeList:return e.listSize;case Je.FixedSizeBinary:return e.byteWidth;default:return 1}}lr[Symbol.toStringTag]=function(t){return t.id=null,t.indices=null,t.isOrdered=null,t.dictionary=null,t[Symbol.toStringTag]="Dictionary"}(lr.prototype);var yr=function(){function t(e,n,r,i,a,o,u){var s;F(this,t),this.type=e,this.dictionary=u,this.offset=Math.floor(Math.max(n||0,0)),this.length=Math.floor(Math.max(r||0,0)),this._nullCount=Math.floor(Math.max(i||0,-1)),this.childData=(o||[]).map((function(e){return e instanceof t?e:e.data})),a instanceof t?(this.stride=a.stride,this.values=a.values,this.typeIds=a.typeIds,this.nullBitmap=a.nullBitmap,this.valueOffsets=a.valueOffsets):(this.stride=hr(e),a&&((s=a[0])&&(this.valueOffsets=s),(s=a[1])&&(this.values=s),(s=a[2])&&(this.nullBitmap=s),(s=a[3])&&(this.typeIds=s)))}return E(t,[{key:"typeId",get:function(){return this.type.typeId}},{key:"ArrayType",get:function(){return this.type.ArrayType}},{key:"buffers",get:function(){return[this.valueOffsets,this.values,this.nullBitmap,this.typeIds]}},{key:"byteLength",get:function(){var t=0,e=this.valueOffsets,n=this.values,r=this.nullBitmap,i=this.typeIds;return e&&(t+=e.byteLength),n&&(t+=n.byteLength),r&&(t+=r.byteLength),i&&(t+=i.byteLength),this.childData.reduce((function(t,e){return t+e.byteLength}),t)}},{key:"nullCount",get:function(){var t,e=this._nullCount;return e<=-1&&(t=this.nullBitmap)&&(this._nullCount=e=this.length-yn(t,this.offset,this.offset+this.length)),e}},{key:"clone",value:function(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.offset,r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:this.length,i=arguments.length>3&&void 0!==arguments[3]?arguments[3]:this._nullCount,a=arguments.length>4&&void 0!==arguments[4]?arguments[4]:this,o=arguments.length>5&&void 0!==arguments[5]?arguments[5]:this.childData;return new t(e,n,r,i,a,o,this.dictionary)}},{key:"slice",value:function(t,e){var n=this.stride,r=this.typeId,i=this.childData,a=+(0===this._nullCount)-1,o=16===r?n:1,u=this._sliceBuffers(t,e,n,r);return this.clone(this.type,this.offset+t,e,a,u,!i.length||this.valueOffsets?i:this._sliceChildren(i,o*t,o*e))}},{key:"_changeLengthAndBackfillNullBitmap",value:function(t){if(this.typeId===Je.Null)return this.clone(this.type,0,t,0);var e=this.length,n=this.nullCount,r=new Uint8Array((t+63&-64)>>3).fill(255,0,e>>3);r[e>>3]=(1<0&&r.set(fn(this.offset,e,this.nullBitmap),0);var i=this.buffers;return i[Ze.VALIDITY]=r,this.clone(this.type,0,t,n+(t-e),i)}},{key:"_sliceBuffers",value:function(t,e,n,r){var i,a=this.buffers;return(i=a[Ze.TYPE])&&(a[Ze.TYPE]=i.subarray(t,t+e)),(i=a[Ze.OFFSET])&&(a[Ze.OFFSET]=i.subarray(t,t+e+1))||(i=a[Ze.DATA])&&(a[Ze.DATA]=6===r?i:i.subarray(n*t,n*(t+e))),a}},{key:"_sliceChildren",value:function(t,e,n){return t.map((function(t){return t.slice(e,n)}))}}],[{key:"new",value:function(e,n,r,i,a,o,u){switch(a instanceof t?a=a.buffers:a||(a=[]),e.typeId){case Je.Null:return t.Null(e,n,r);case 
Je.Int:return t.Int(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Dictionary:return t.Dictionary(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[],u);case Je.Float:return t.Float(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Bool:return t.Bool(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Decimal:return t.Decimal(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Date:return t.Date(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Time:return t.Time(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Timestamp:return t.Timestamp(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Interval:return t.Interval(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.FixedSizeBinary:return t.FixedSizeBinary(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.DATA]||[]);case Je.Binary:return t.Binary(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],a[Ze.DATA]||[]);case Je.Utf8:return t.Utf8(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],a[Ze.DATA]||[]);case Je.List:return t.List(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],(o||[])[0]);case Je.FixedSizeList:return t.FixedSizeList(e,n,r,i||0,a[Ze.VALIDITY],(o||[])[0]);case Je.Struct:return t.Struct(e,n,r,i||0,a[Ze.VALIDITY],o||[]);case Je.Map:return t.Map(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.OFFSET]||[],(o||[])[0]);case Je.Union:return t.Union(e,n,r,i||0,a[Ze.VALIDITY],a[Ze.TYPE]||[],a[Ze.OFFSET]||o,o)}throw new Error("Unrecognized typeId ".concat(e.typeId))}},{key:"Null",value:function(e,n,r){return new t(e,n,r,0)}},{key:"Int",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Dictionary",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[void 0,Ht(e.indices.ArrayType,o),Jt(a)],[],u)}},{key:"Float",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Bool",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Decimal",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Date",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Time",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Timestamp",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Interval",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"FixedSizeBinary",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,Ht(e.ArrayType,o),Jt(a)])}},{key:"Binary",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),Jt(u),Jt(a)])}},{key:"Utf8",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),Jt(u),Jt(a)])}},{key:"List",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),void 0,Jt(a)],[u])}},{key:"FixedSizeList",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,void 0,Jt(a)],[o])}},{key:"Struct",value:function(e,n,r,i,a,o){return new t(e,n,r,i,[void 0,void 0,Jt(a)],o)}},{key:"Map",value:function(e,n,r,i,a,o,u){return new t(e,n,r,i,[Gt(o),void 0,Jt(a)],[u])}},{key:"Union",value:function(e,n,r,i,a,o,u,s){var c=[void 0,void 0,Jt(a),Ht(e.ArrayType,o)];return e.mode===en.Sparse?new t(e,n,r,i,c,u):(c[Ze.OFFSET]=Gt(u),new t(e,n,r,i,c,s))}}]),t}();yr.prototype.childData=Object.freeze([]);function pr(t){if(null===t)return"null";if(undefined===t)return"undefined";switch(typeof t){case"number":case"bigint":return"".concat(t);case"string":return'"'.concat(t,'"')}return"function"===typeof t[Symbol.toPrimitive]?t[Symbol.toPrimitive]("string"):ArrayBuffer.isView(t)?"[".concat(t,"]"):JSON.stringify(t)}function 
dr(t){if(!t||t.length<=0)return function(t){return!0};var e="",n=t.filter((function(t){return t===t}));return n.length>0&&(e="\n switch (x) {".concat(n.map((function(t){return"\n case ".concat(function(t){if("bigint"!==typeof t)return pr(t);if(kt)return"".concat(pr(t),"n");return'"'.concat(pr(t),'"')}(t),":")})).join(""),"\n return false;\n }")),t.length!==n.length&&(e="if (x !== x) return false;\n".concat(e)),new Function("x","".concat(e,"\nreturn true;"))}var vr=function(t,e){return(t*e+63&-64||64)/e},br=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:0;return t.length>=e?t.subarray(0,e):Yt(new t.constructor(e),t,0)},gr=function(){function t(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:1;F(this,t),this.buffer=e,this.stride=n,this.BYTES_PER_ELEMENT=e.BYTES_PER_ELEMENT,this.ArrayType=e.constructor,this._resize(this.length=e.length/n|0)}return E(t,[{key:"byteLength",get:function(){return this.length*this.stride*this.BYTES_PER_ELEMENT|0}},{key:"reservedLength",get:function(){return this.buffer.length/this.stride}},{key:"reservedByteLength",get:function(){return this.buffer.byteLength}},{key:"set",value:function(t,e){return this}},{key:"append",value:function(t){return this.set(this.length,t)}},{key:"reserve",value:function(t){if(t>0){this.length+=t;var e=this.stride,n=this.length*e,r=this.buffer.length;n>=r&&this._resize(vr(0===r?1*n:2*n,this.BYTES_PER_ELEMENT))}return this}},{key:"flush",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this.length;t=vr(t*this.stride,this.BYTES_PER_ELEMENT);var e=br(this.buffer,t);return this.clear(),e}},{key:"clear",value:function(){return this.length=0,this._resize(0),this}},{key:"_resize",value:function(t){return this.buffer=Yt(new this.ArrayType(t),this.buffer)}}]),t}();gr.prototype.offset=0;var mr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"last",value:function(){return this.get(this.length-1)}},{key:"get",value:function(t){return this.buffer[t]}},{key:"set",value:function(t,e){return this.reserve(t-this.length+1),this.buffer[t*this.stride]=e,this}}]),n}(gr),kr=function(t){ot(n,t);var e=yt(n);function n(){var t,r=arguments.length>0&&void 0!==arguments[0]?arguments[0]:new Uint8Array(0);return F(this,n),(t=e.call(this,r,1/8)).numValid=0,t}return E(n,[{key:"numInvalid",get:function(){return this.length-this.numValid}},{key:"get",value:function(t){return this.buffer[t>>3]>>t%8&1}},{key:"set",value:function(t,e){var n=this.reserve(t-this.length+1).buffer,r=t>>3,i=t%8,a=n[r]>>i&1;return e?0===a&&(n[r]|=1<0&&void 0!==arguments[0]?arguments[0]:new Int32Array(1);return F(this,n),e.call(this,t,1)}return E(n,[{key:"append",value:function(t){return this.set(this.length-1,t)}},{key:"set",value:function(t,e){var n=this.length-1,r=this.reserve(t-n+1).buffer;return n0&&void 0!==arguments[0]?arguments[0]:this.length-1;return t>this.length&&this.set(t-1,0),ze(ut(n.prototype),"flush",this).call(this,t+1)}}]),n}(mr),_r=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"ArrayType64",get:function(){return this._ArrayType64||(this._ArrayType64=this.buffer instanceof Int32Array?_t:St)}},{key:"set",value:function(t,e){switch(this.reserve(t-this.length+1),typeof e){case"bigint":this.buffer64[t]=e;break;case"number":this.buffer[t*this.stride]=e;break;default:this.buffer.set(e,t*this.stride)}return this}},{key:"_resize",value:function(t){var 
e=ze(ut(n.prototype),"_resize",this).call(this,t),r=e.byteLength/(this.BYTES_PER_ELEMENT*this.stride);return kt&&(this.buffer64=new this.ArrayType64(e.buffer,e.byteOffset,r)),e}}]),n}(gr),Ir=function(){function t(e){var n=e.type,r=e.nullValues;F(this,t),this.length=0,this.finished=!1,this.type=n,this.children=[],this.nullValues=r,this.stride=hr(n),this._nulls=new kr,r&&r.length>0&&(this._isValid=dr(r))}return E(t,[{key:"toVector",value:function(){return qe.new(this.flush())}},{key:"ArrayType",get:function(){return this.type.ArrayType}},{key:"nullCount",get:function(){return this._nulls.numInvalid}},{key:"numChildren",get:function(){return this.children.length}},{key:"byteLength",get:function(){var t=0;return this._offsets&&(t+=this._offsets.byteLength),this._values&&(t+=this._values.byteLength),this._nulls&&(t+=this._nulls.byteLength),this._typeIds&&(t+=this._typeIds.byteLength),this.children.reduce((function(t,e){return t+e.byteLength}),t)}},{key:"reservedLength",get:function(){return this._nulls.reservedLength}},{key:"reservedByteLength",get:function(){var t=0;return this._offsets&&(t+=this._offsets.reservedByteLength),this._values&&(t+=this._values.reservedByteLength),this._nulls&&(t+=this._nulls.reservedByteLength),this._typeIds&&(t+=this._typeIds.reservedByteLength),this.children.reduce((function(t,e){return t+e.reservedByteLength}),t)}},{key:"valueOffsets",get:function(){return this._offsets?this._offsets.buffer:null}},{key:"values",get:function(){return this._values?this._values.buffer:null}},{key:"nullBitmap",get:function(){return this._nulls?this._nulls.buffer:null}},{key:"typeIds",get:function(){return this._typeIds?this._typeIds.buffer:null}},{key:"append",value:function(t){return this.set(this.length,t)}},{key:"isValid",value:function(t){return this._isValid(t)}},{key:"set",value:function(t,e){return this.setValid(t,this.isValid(e))&&this.setValue(t,e),this}},{key:"setValue",value:function(t,e){this._setValue(this,t,e)}},{key:"setValid",value:function(t,e){return this.length=this._nulls.set(t,+e).length,e}},{key:"addChild",value:function(t){arguments.length>1&&void 0!==arguments[1]||"".concat(this.numChildren);throw new Error('Cannot append children to non-nested type "'.concat(this.type,'"'))}},{key:"getChildAt",value:function(t){return this.children[t]||null}},{key:"flush",value:function(){var t=[],e=this._values,n=this._offsets,r=this._typeIds,i=this.length,a=this.nullCount;r?(t[Ze.TYPE]=r.flush(i),n&&(t[Ze.OFFSET]=n.flush(i))):n?(e&&(t[Ze.DATA]=e.flush(n.last())),t[Ze.OFFSET]=n.flush(i)):e&&(t[Ze.DATA]=e.flush(i)),a>0&&(t[Ze.VALIDITY]=this._nulls.flush(i));var o=yr.new(this.type,0,i,a,t,this.children.map((function(t){return t.flush()})));return this.clear(),o}},{key:"finish",value:function(){return this.finished=!0,this.children.forEach((function(t){return t.finish()})),this}},{key:"clear",value:function(){return this.length=0,this._offsets&&this._offsets.clear(),this._values&&this._values.clear(),this._nulls&&this._nulls.clear(),this._typeIds&&this._typeIds.clear(),this.children.forEach((function(t){return t.clear()})),this}}],[{key:"new",value:function(t){}},{key:"throughNode",value:function(t){throw new Error('"throughNode" not available in this environment')}},{key:"throughDOM",value:function(t){throw new Error('"throughDOM" not available in this environment')}},{key:"throughIterable",value:function(t){return function(t){var e=t.queueingStrategy,n=void 0===e?"count":e,r=t.highWaterMark,i=void 
0===r?"bytes"!==n?1e3:Math.pow(2,14):r,a="bytes"!==n?"length":"byteLength";return R.mark((function e(n){var r,o,u,s,c;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:r=0,o=Ir.new(t),u=O(n),e.prev=3,u.s();case 5:if((s=u.n()).done){e.next=14;break}if(c=s.value,!(o.append(c)[a]>=i)){e.next=12;break}if(e.t0=++r,!e.t0){e.next=12;break}return e.next=12,o.toVector();case 12:e.next=5;break;case 14:e.next=19;break;case 16:e.prev=16,e.t1=e.catch(3),u.e(e.t1);case 19:return e.prev=19,u.f(),e.finish(19);case 22:if(!(o.finish().length>0||0===r)){e.next=25;break}return e.next=25,o.toVector();case 25:case"end":return e.stop()}}),e,null,[[3,16,19,22]])}))}(t)}},{key:"throughAsyncIterable",value:function(t){return function(t){var e=t.queueingStrategy,n=void 0===e?"count":e,r=t.highWaterMark,i=void 0===r?"bytes"!==n?1e3:Math.pow(2,14):r,a="bytes"!==n?"length":"byteLength";return function(){var e=j(R.mark((function e(n){var r,o,u,s,c,f,l,h;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:r=0,o=Ir.new(t),u=!1,s=!1,e.prev=4,f=P(n);case 6:return e.next=8,C(f.next());case 8:if(!(u=!(l=e.sent).done)){e.next=18;break}if(h=l.value,!(o.append(h)[a]>=i)){e.next=15;break}if(e.t0=++r,!e.t0){e.next=15;break}return e.next=15,o.toVector();case 15:u=!1,e.next=6;break;case 18:e.next=24;break;case 20:e.prev=20,e.t1=e.catch(4),s=!0,c=e.t1;case 24:if(e.prev=24,e.prev=25,!u||null==f.return){e.next=29;break}return e.next=29,C(f.return());case 29:if(e.prev=29,!s){e.next=32;break}throw c;case 32:return e.finish(29);case 33:return e.finish(24);case 34:if(!(o.finish().length>0||0===r)){e.next=37;break}return e.next=37,o.toVector();case 37:case"end":return e.stop()}}),e,null,[[4,20,24,34],[25,,29,33]])})));return function(t){return e.apply(this,arguments)}}()}(t)}}]),t}();Ir.prototype.length=1,Ir.prototype.stride=1,Ir.prototype.children=null,Ir.prototype.finished=!1,Ir.prototype.nullValues=null,Ir.prototype._isValid=function(){return!0};var Sr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new mr(new r.ArrayType(0),r.stride),r}return E(n,[{key:"setValue",value:function(t,e){var r=this._values;return r.reserve(t-r.length+1),ze(ut(n.prototype),"setValue",this).call(this,t,e)}}]),n}(Ir),xr=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._pendingLength=0,r._offsets=new wr,r}return E(n,[{key:"setValue",value:function(t,e){var n=this._pending||(this._pending=new Map),r=n.get(t);r&&(this._pendingLength-=r.length),this._pendingLength+=e.length,n.set(t,e)}},{key:"setValid",value:function(t,e){return!!ze(ut(n.prototype),"setValid",this).call(this,t,e)||((this._pending||(this._pending=new Map)).set(t,void 0),!1)}},{key:"clear",value:function(){return this._pendingLength=0,this._pending=void 0,ze(ut(n.prototype),"clear",this).call(this)}},{key:"flush",value:function(){return this._flush(),ze(ut(n.prototype),"flush",this).call(this)}},{key:"finish",value:function(){return this._flush(),ze(ut(n.prototype),"finish",this).call(this)}},{key:"_flush",value:function(){var t=this._pending,e=this._pendingLength;return this._pendingLength=0,this._pending=void 0,t&&t.size>0&&this._flushPending(t,e),this}}]),n}(Ir);var Ar=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new kr,r}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,+e)}}]),n}(Ir),Tr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return 
E(n,[{key:"setValue",value:function(t,e){}},{key:"setValid",value:function(t,e){return this.length=Math.max(t+1,this.length),e}}]),n}(Ir),Br=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),Or=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Br),Dr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Br),Lr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),Fr=function(t){ot(n,t);var e=yt(n);function n(t){var r,i=t.type,a=t.nullValues,o=t.dictionaryHashFunction;return F(this,n),(r=e.call(this,{type:new lr(i.dictionary,i.indices,i.id,i.isOrdered)}))._nulls=null,r._dictionaryOffset=0,r._keysToIndices=Object.create(null),r.indices=Ir.new({type:r.type.indices,nullValues:a}),r.dictionary=Ir.new({type:r.type.dictionary,nullValues:null}),"function"===typeof o&&(r.valueToKey=o),r}return E(n,[{key:"values",get:function(){return this.indices.values}},{key:"nullCount",get:function(){return this.indices.nullCount}},{key:"nullBitmap",get:function(){return this.indices.nullBitmap}},{key:"byteLength",get:function(){return this.indices.byteLength+this.dictionary.byteLength}},{key:"reservedLength",get:function(){return this.indices.reservedLength+this.dictionary.reservedLength}},{key:"reservedByteLength",get:function(){return this.indices.reservedByteLength+this.dictionary.reservedByteLength}},{key:"isValid",value:function(t){return this.indices.isValid(t)}},{key:"setValid",value:function(t,e){var n=this.indices;return e=n.setValid(t,e),this.length=n.length,e}},{key:"setValue",value:function(t,e){var n=this._keysToIndices,r=this.valueToKey(e),i=n[r];return void 0===i&&(n[r]=i=this._dictionaryOffset+this.dictionary.append(e).length-1),this.indices.setValue(t,i)}},{key:"flush",value:function(){var t=this.type,e=this._dictionary,n=this.dictionary.toVector(),r=this.indices.flush().clone(t);return r.dictionary=e?e.concat(n):n,this.finished||(this._dictionaryOffset+=n.length),this._dictionary=r.dictionary,this.clear(),r}},{key:"finish",value:function(){return this.indices.finish(),this.dictionary.finish(),this._dictionaryOffset=0,this._keysToIndices=Object.create(null),ze(ut(n.prototype),"finish",this).call(this)}},{key:"clear",value:function(){return this.indices.clear(),this.dictionary.clear(),ze(ut(n.prototype),"clear",this).call(this)}},{key:"valueToKey",value:function(t){return"string"===typeof t?t:"".concat(t)}}]),n}(Ir),Mr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),Er=new Float64Array(1),Ur=new Uint32Array(Er.buffer);function Nr(t){var e=(31744&t)>>10,n=(1023&t)/1024,r=Math.pow(-1,(32768&t)>>15);switch(e){case 31:return r*(n?NaN:1/0);case 0:return r*(n?6103515625e-14*n:0)}return r*Math.pow(2,e-15)*(1+n)}function Cr(t){if(t!==t)return 32256;Er[0]=t;var e=(2147483648&Ur[1])>>16&65535,n=2146435072&Ur[1],r=0;return n>=1089470464?Ur[0]>0?n=31744:(n=(2080374784&n)>>16,r=(1048575&Ur[1])>>10):n<=1056964608?(r=1048576+((r=1048576+(1048575&Ur[1]))<<(n>>20)-998)>>21,n=0):(n=n-1056964608>>10,r=512+(1048575&Ur[1])>>10),e|n|65535&r}var Vr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),jr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,Cr(e))}}]),n}(Vr),Rr=function(t){ot(n,t);var e=yt(n);function 
n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,e)}}]),n}(Vr),Pr=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,e)}}]),n}(Vr);function zr(t,e,n){return zr=st()?Reflect.construct:function(t,e,n){var r=[null];r.push.apply(r,e);var i=new(Function.bind.apply(t,r));return n&&at(i,n.prototype),i},zr.apply(null,arguments)}var Yr,Wr,Hr=Symbol.for("isArrowBigNum");function $r(t){for(var e=arguments.length,n=new Array(e>1?e-1:0),r=1;r>>=0),s+=(n>>>0)+e*Math.pow(c,32);return s}function Zr(t){var e="",n=new Uint32Array(2),r=new Uint16Array(t.buffer,t.byteOffset,t.byteLength/2),i=new Uint32Array((r=new Uint16Array(r).reverse()).buffer),a=-1,o=r.length-1;do{for(n[0]=r[a=0];a0&&void 0!==arguments[0]?arguments[0]:"default";switch(t){case"number":return Jr(this);case"string":return Yr(this);case"default":return Wr(this)}return Yr(this)},Object.setPrototypeOf(Kr.prototype,Object.create(Int32Array.prototype)),Object.setPrototypeOf(Gr.prototype,Object.create(Uint32Array.prototype)),Object.setPrototypeOf(qr.prototype,Object.create(Uint32Array.prototype)),Object.assign(Kr.prototype,$r.prototype,{constructor:Kr,signed:!0,TypedArray:Int32Array,BigIntArray:_t}),Object.assign(Gr.prototype,$r.prototype,{constructor:Gr,signed:!1,TypedArray:Uint32Array,BigIntArray:St}),Object.assign(qr.prototype,$r.prototype,{constructor:qr,signed:!0,TypedArray:Uint32Array,BigIntArray:St}),kt?(Wr=function(t){return 8===t.byteLength?new t.BigIntArray(t.buffer,t.byteOffset,1)[0]:Zr(t)},Yr=function(t){return 8===t.byteLength?"".concat(new t.BigIntArray(t.buffer,t.byteOffset,1)[0]):Zr(t)}):Wr=Yr=Zr;var Qr,Xr=function(){function t(e,n){return F(this,t),t.new(e,n)}return E(t,null,[{key:"new",value:function(t,e){switch(e){case!0:return new Kr(t);case!1:return new Gr(t)}switch(t.constructor){case Int8Array:case Int16Array:case Int32Array:case _t:return new Kr(t)}return 16===t.byteLength?new qr(t):new Gr(t)}},{key:"signed",value:function(t){return new Kr(t)}},{key:"unsigned",value:function(t){return new Gr(t)}},{key:"decimal",value:function(t){return new qr(t)}}]),t}(),ti=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"setValue",value:function(t,e){this._values.set(t,e)}}]),n}(Sr),ei=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ni=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ri=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ii=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),t.nullValues&&(t.nullValues=t.nullValues.map(ci)),(r=e.call(this,t))._values=new _r(new Int32Array(0),2),r}return E(n,[{key:"values64",get:function(){return this._values.buffer64}},{key:"isValid",value:function(t){return ze(ut(n.prototype),"isValid",this).call(this,ci(t))}}]),n}(ti),ai=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),oi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),ui=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ti),si=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),t.nullValues&&(t.nullValues=t.nullValues.map(ci)),(r=e.call(this,t))._values=new _r(new 
Uint32Array(0),2),r}return E(n,[{key:"values64",get:function(){return this._values.buffer64}},{key:"isValid",value:function(t){return ze(ut(n.prototype),"isValid",this).call(this,ci(t))}}]),n}(ti),ci=(Qr={BigIntArray:_t},function(t){return ArrayBuffer.isView(t)&&(Qr.buffer=t.buffer,Qr.byteOffset=t.byteOffset,Qr.byteLength=t.byteLength,t=Wr(Qr),Qr.buffer=null),t}),fi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),li=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),hi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),yi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),pi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(fi),di=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),vi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),bi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),gi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),mi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(di),ki=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(Sr),wi=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ki),_i=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(ki),Ii=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new gr(new Uint8Array(0)),r}return E(n,[{key:"byteLength",get:function(){var t=this._pendingLength+4*this.length;return this._offsets&&(t+=this._offsets.byteLength),this._values&&(t+=this._values.byteLength),this._nulls&&(t+=this._nulls.byteLength),t}},{key:"setValue",value:function(t,e){return ze(ut(n.prototype),"setValue",this).call(this,t,Jt(e))}},{key:"_flushPending",value:function(t,e){var n,r,i=this._offsets,a=this._values.reserve(e).buffer,o=0,u=0,s=0,c=O(t);try{for(c.s();!(r=c.n()).done;){var f=U(r.value,2);o=f[0],void 0===(n=f[1])?i.set(o,0):(u=n.length,a.set(n,s),i.set(o,u),s+=u)}}catch(l){c.e(l)}finally{c.f()}}}]),n}(xr),Si=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._values=new gr(new Uint8Array(0)),r}return E(n,[{key:"byteLength",get:function(){var t=this._pendingLength+4*this.length;return this._offsets&&(t+=this._offsets.byteLength),this._values&&(t+=this._values.byteLength),this._nulls&&(t+=this._nulls.byteLength),t}},{key:"setValue",value:function(t,e){return ze(ut(n.prototype),"setValue",this).call(this,t,it(e))}},{key:"_flushPending",value:function(t,e){}}]),n}(xr);Si.prototype._flushPending=Ii.prototype._flushPending;var xi=function(){function t(){F(this,t)}return E(t,[{key:"length",get:function(){return this._values.length}},{key:"get",value:function(t){return this._values[t]}},{key:"clear",value:function(){return this._values=null,this}},{key:"bind",value:function(t){return t instanceof qe?t:(this._values=t,this)}}]),t}(),Ai=Symbol.for("parent"),Ti=Symbol.for("rowIndex"),Bi=Symbol.for("keyToIdx"),Oi=Symbol.for("idxToVal"),Di=Symbol.for("nodejs.util.inspect.custom"),Li=function(t){function 
e(t,n){F(this,e),this[Ai]=t,this.size=n}return E(e,[{key:"entries",value:function(){return this[Symbol.iterator]()}},{key:"has",value:function(t){return void 0!==this.get(t)}},{key:"get",value:function(t){var e=void 0;if(null!==t&&void 0!==t){var n=this[Bi]||(this[Bi]=new Map),r=n.get(t);if(void 0!==r){var i=this[Oi]||(this[Oi]=new Array(this.size));void 0!==(e=i[r])||(i[r]=e=this.getValue(r))}else if((r=this.getIndex(t))>-1){n.set(t,r);var a=this[Oi]||(this[Oi]=new Array(this.size));void 0!==(e=a[r])||(a[r]=e=this.getValue(r))}}return e}},{key:"set",value:function(t,e){if(null!==t&&void 0!==t){var n=this[Bi]||(this[Bi]=new Map),r=n.get(t);if(void 0===r&&n.set(t,r=this.getIndex(t)),r>-1)(this[Oi]||(this[Oi]=new Array(this.size)))[r]=this.setValue(r,e)}return this}},{key:"clear",value:function(){throw new Error("Clearing ".concat(this[Symbol.toStringTag]," not supported."))}},{key:"delete",value:function(t){throw new Error("Deleting ".concat(this[Symbol.toStringTag]," values not supported."))}},{key:Symbol.iterator,value:R.mark((function t(){var e,n,r,i,a,o,u,s,c;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=this.keys(),n=this.values(),r=this[Bi]||(this[Bi]=new Map),i=this[Oi]||(this[Oi]=new Array(this.size)),u=0;case 5:if((s=e.next()).done||(c=n.next()).done){t.next=15;break}return a=s.value,o=c.value,i[u]=o,r.has(a)||r.set(a,u),t.next=12,[a,o];case 12:++u,t.next=5;break;case 15:case"end":return t.stop()}}),t,this)}))},{key:"forEach",value:function(t,e){for(var n,r,i,a,o=this.keys(),u=this.values(),s=void 0===e?t:function(n,r,i){return t.call(e,n,r,i)},c=this[Bi]||(this[Bi]=new Map),f=this[Oi]||(this[Oi]=new Array(this.size)),l=0;!(i=o.next()).done&&!(a=u.next()).done;++l)n=i.value,r=a.value,f[l]=r,c.has(n)||c.set(n,l),s(r,n,this)}},{key:"toArray",value:function(){return vn(this.values())}},{key:"toJSON",value:function(){var t={};return this.forEach((function(e,n){return t[n]=e})),t}},{key:"inspect",value:function(){return this.toString()}},{key:Di,value:function(){return this.toString()}},{key:"toString",value:function(){var t=[];return this.forEach((function(e,n){n=pr(n),e=pr(e),t.push("".concat(n,": ").concat(e))})),"{ ".concat(t.join(", ")," }")}}]),e}();Li[Symbol.toStringTag]=function(t){var e;return Object.defineProperties(t,(Ve(e={size:{writable:!0,enumerable:!1,configurable:!1,value:0}},Ai,{writable:!0,enumerable:!1,configurable:!1,value:null}),Ve(e,Ti,{writable:!0,enumerable:!1,configurable:!1,value:-1}),e)),t[Symbol.toStringTag]="Row"}(Li.prototype);var Fi=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),ht(r=e.call(this,t,t.length),Ni(lt(r)))}return E(n,[{key:"keys",value:function(){return this[Ai].getChildAt(0)[Symbol.iterator]()}},{key:"values",value:function(){return this[Ai].getChildAt(1)[Symbol.iterator]()}},{key:"getKey",value:function(t){return this[Ai].getChildAt(0).get(t)}},{key:"getIndex",value:function(t){return this[Ai].getChildAt(0).indexOf(t)}},{key:"getValue",value:function(t){return this[Ai].getChildAt(1).get(t)}},{key:"setValue",value:function(t,e){this[Ai].getChildAt(1).set(t,e)}}]),n}(Li),Mi=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),ht(r=e.call(this,t,t.type.children.length),Ui(lt(r)))}return E(n,[{key:"keys",value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=O(this[Ai].type.children),t.prev=1,e.s();case 3:if((n=e.n()).done){t.next=9;break}return r=n.value,t.next=7,r.name;case 7:t.next=3;break;case 9:t.next=14;break;case 
11:t.prev=11,t.t0=t.catch(1),e.e(t.t0);case 14:return t.prev=14,e.f(),t.finish(14);case 17:case"end":return t.stop()}}),t,this,[[1,11,14,17]])}))},{key:"values",value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=O(this[Ai].type.children),t.prev=1,e.s();case 3:if((n=e.n()).done){t.next=9;break}return r=n.value,t.next=7,this[r.name];case 7:t.next=3;break;case 9:t.next=14;break;case 11:t.prev=11,t.t0=t.catch(1),e.e(t.t0);case 14:return t.prev=14,e.f(),t.finish(14);case 17:case"end":return t.stop()}}),t,this,[[1,11,14,17]])}))},{key:"getKey",value:function(t){return this[Ai].type.children[t].name}},{key:"getIndex",value:function(t){return this[Ai].type.children.findIndex((function(e){return e.name===t}))}},{key:"getValue",value:function(t){return this[Ai].getChildAt(t).get(this[Ti])}},{key:"setValue",value:function(t,e){return this[Ai].getChildAt(t).set(this[Ti],e)}}]),n}(Li);Object.setPrototypeOf(Li.prototype,Map.prototype);var Ei,Ui=function(){var t={enumerable:!0,configurable:!1,get:null,set:null};return function(e){var n,r=-1,i=e[Bi]||(e[Bi]=new Map),a=function(t){return function(){return this.get(t)}},o=function(t){return function(e){return this.set(t,e)}},u=O(e.keys());try{for(u.s();!(n=u.n()).done;){var s=n.value;i.set(s,++r),t.get=a(s),t.set=o(s),e.hasOwnProperty(s)||(t.enumerable=!0,Object.defineProperty(e,s,t)),e.hasOwnProperty(r)||(t.enumerable=!1,Object.defineProperty(e,r,t))}}catch(c){u.e(c)}finally{u.f()}return t.get=t.set=null,e}}(),Ni=function(){if("undefined"===typeof Proxy)return Ui;var t=Li.prototype.has,e=Li.prototype.get,n=Li.prototype.set,r=Li.prototype.getKey,i={isExtensible:function(){return!1},deleteProperty:function(){return!1},preventExtensions:function(){return!0},ownKeys:function(t){return vn(t.keys()).map((function(t){return"".concat(t)}))},has:function(t,e){switch(e){case"getKey":case"getIndex":case"getValue":case"setValue":case"toArray":case"toJSON":case"inspect":case"constructor":case"isPrototypeOf":case"propertyIsEnumerable":case"toString":case"toLocaleString":case"valueOf":case"size":case"has":case"get":case"set":case"clear":case"delete":case"keys":case"values":case"entries":case"forEach":case"__proto__":case"__defineGetter__":case"__defineSetter__":case"hasOwnProperty":case"__lookupGetter__":case"__lookupSetter__":case Symbol.iterator:case Symbol.toStringTag:case Ai:case Ti:case Oi:case Bi:case Di:return!0}return"number"!==typeof e||t.has(e)||(e=t.getKey(e)),t.has(e)},get:function(n,i,a){switch(i){case"getKey":case"getIndex":case"getValue":case"setValue":case"toArray":case"toJSON":case"inspect":case"constructor":case"isPrototypeOf":case"propertyIsEnumerable":case"toString":case"toLocaleString":case"valueOf":case"size":case"has":case"get":case"set":case"clear":case"delete":case"keys":case"values":case"entries":case"forEach":case"__proto__":case"__defineGetter__":case"__defineSetter__":case"hasOwnProperty":case"__lookupGetter__":case"__lookupSetter__":case Symbol.iterator:case Symbol.toStringTag:case Ai:case Ti:case Oi:case Bi:case Di:return Reflect.get(n,i,a)}return"number"!==typeof i||t.call(a,i)||(i=r.call(a,i)),e.call(a,i)},set:function(e,i,a,o){switch(i){case Ai:case Ti:case Oi:case Bi:return 
Reflect.set(e,i,a,o);case"getKey":case"getIndex":case"getValue":case"setValue":case"toArray":case"toJSON":case"inspect":case"constructor":case"isPrototypeOf":case"propertyIsEnumerable":case"toString":case"toLocaleString":case"valueOf":case"size":case"has":case"get":case"set":case"clear":case"delete":case"keys":case"values":case"entries":case"forEach":case"__proto__":case"__defineGetter__":case"__defineSetter__":case"hasOwnProperty":case"__lookupGetter__":case"__lookupSetter__":case Symbol.iterator:case Symbol.toStringTag:return!1}return"number"!==typeof i||t.call(o,i)||(i=r.call(o,i)),!!t.call(o,i)&&!!n.call(o,i,a)}};return function(t){return new Proxy(t,i)}}();function Ci(t,e,n){var r=t.length,i=e>-1?e:r+e%r;return n?n(t,i):i}function Vi(t,e,n,r){var i=t.length,a=void 0===i?0:i,o="number"!==typeof e?0:e,u="number"!==typeof n?a:n;return o<0&&(o=(o%a+a)%a),u<0&&(u=(u%a+a)%a),ua&&(u=a),r?r(t,o,u):[o,u]}var ji=kt?mt(0):0,Ri=function(t){return t!==t};function Pi(t){var e=typeof t;if("object"!==e||null===t)return Ri(t)?Ri:"bigint"!==e?function(e){return e===t}:function(e){return ji+e===t};if(t instanceof Date){var n=t.valueOf();return function(t){return t instanceof Date&&t.valueOf()===n}}return ArrayBuffer.isView(t)?function(e){return!!e&&Ae(t,e)}:t instanceof Map?function(t){var e=-1,n=[];return t.forEach((function(t){return n[++e]=Pi(t)})),zi(n)}(t):Array.isArray(t)?function(t){for(var e=[],n=-1,r=t.length;++n1&&void 0!==arguments[1]?arguments[1]:[],a=arguments.length>2&&void 0!==arguments[2]?arguments[2]:Hi(i);return F(this,r),(e=n.call(this))._nullCount=-1,e._type=t,e._chunks=i,e._chunkOffsets=a,e._length=a[a.length-1],e._numChildren=(e._type.children||[]).length,e}return E(r,[{key:"type",get:function(){return this._type}},{key:"length",get:function(){return this._length}},{key:"chunks",get:function(){return this._chunks}},{key:"typeId",get:function(){return this._type.typeId}},{key:"VectorName",get:function(){return"Chunked<".concat(this._type,">")}},{key:"data",get:function(){return this._chunks[0]?this._chunks[0].data:null}},{key:"ArrayType",get:function(){return this._type.ArrayType}},{key:"numChildren",get:function(){return this._numChildren}},{key:"stride",get:function(){return this._chunks[0]?this._chunks[0].stride:1}},{key:"byteLength",get:function(){return this._chunks.reduce((function(t,e){return t+e.byteLength}),0)}},{key:"nullCount",get:function(){var t=this._nullCount;return t<0&&(this._nullCount=t=this._chunks.reduce((function(t,e){return t+e.nullCount}),0)),t}},{key:"indices",get:function(){if(Fn.isDictionary(this._type)){if(!this._indices){var t=this._chunks;this._indices=1===t.length?t[0].indices:r.concat.apply(r,vn(t.map((function(t){return t.indices}))))}return this._indices}return null}},{key:"dictionary",get:function(){return Fn.isDictionary(this._type)?this._chunks[this._chunks.length-1].data.dictionary:null}},{key:e,value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:e=O(this._chunks),t.prev=1,e.s();case 3:if((n=e.n()).done){t.next=8;break}return r=n.value,t.delegateYield(r,"t0",6);case 6:t.next=3;break;case 8:t.next=13;break;case 10:t.prev=10,t.t1=t.catch(1),e.e(t.t1);case 13:return t.prev=13,e.f(),t.finish(13);case 16:case"end":return t.stop()}}),t,this,[[1,10,13,16]])}))},{key:"clone",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this._chunks;return new r(this._type,t)}},{key:"concat",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n=this._numChildren)return null;var 
e,n,i,a=this._children||(this._children=[]);return(e=a[t])?e:(n=(this._type.children||[])[t])&&(i=this._chunks.map((function(e){return e.getChildAt(t)})).filter((function(t){return null!=t}))).length>0?a[t]=new r(n.type,i):null}},{key:"search",value:function(t,e){var n=t,r=this._chunkOffsets,i=r.length-1;if(n<0)return null;if(n>=r[i])return null;if(i<=1)return e?e(this,0,n):[0,n];var a=0,o=0,u=0;do{if(a+1===i)return e?e(this,a,n-o):[a,n-o];n>=r[u=a+(i-a)/2|0]?a=u:i=u}while(n=(o=r[a]));return null}},{key:"isValid",value:function(t){return!!this.search(t,this.isValidInternal)}},{key:"get",value:function(t){return this.search(t,this.getInternal)}},{key:"set",value:function(t,e){this.search(t,(function(t,n,r){return t.chunks[n].set(r,e)}))}},{key:"indexOf",value:function(t,e){var n=this;return e&&"number"===typeof e?this.search(e,(function(e,r,i){return n.indexOfInternal(e,r,i,t)})):this.indexOfInternal(this,0,Math.max(0,e||0),t)}},{key:"toArray",value:function(){var t=this.chunks,e=t.length,n=this._type.ArrayType;if(e<=0)return new n(0);if(e<=1)return t[0].toArray();for(var r=0,i=new Array(e),a=-1;++a=n)break;if(!(e>=f+c))if(f>=e&&f+c<=n)r.push(s);else{var l=Math.max(0,e-f),h=Math.min(n-f,c);r.push(s.slice(l,h))}}return t.clone(r)}}],[{key:"flatten",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n1&&void 0!==arguments[1]?arguments[1]:[],a=arguments.length>2?arguments[2]:void 0;return F(this,n),i=Wi.flatten.apply(Wi,vn(i)),(r=e.call(this,t.type,i,a))._field=t,1!==i.length||lt(r)instanceof qi?r:ht(r,new qi(t,i[0],r._chunkOffsets))}return E(n,[{key:"field",get:function(){return this._field}},{key:"name",get:function(){return this._field.name}},{key:"nullable",get:function(){return this._field.nullable}},{key:"metadata",get:function(){return this._field.metadata}},{key:"clone",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this._chunks;return new n(this._field,t)}},{key:"getChildAt",value:function(t){if(t<0||t>=this.numChildren)return null;var e,r,i,a=this._children||(this._children=[]);return(e=a[t])?e:(r=(this.type.children||[])[t])&&(i=this._chunks.map((function(e){return e.getChildAt(t)})).filter((function(t){return null!=t}))).length>0?a[t]=new n(r,i):null}}],[{key:"new",value:function(t,e){for(var r=arguments.length,i=new Array(r>2?r-2:0),a=2;a0}))&&(t=t.clone({nullable:!0}));return new n(t,o)}}]),n}(Wi),qi=function(t){ot(n,t);var e=yt(n);function n(t,r,i){var a;return F(this,n),(a=e.call(this,t,[r],i))._chunk=r,a}return E(n,[{key:"search",value:function(t,e){return e?e(this,0,t):[0,t]}},{key:"isValid",value:function(t){return this._chunk.isValid(t)}},{key:"get",value:function(t){return this._chunk.get(t)}},{key:"set",value:function(t,e){this._chunk.set(t,e)}},{key:"indexOf",value:function(t,e){return this._chunk.indexOf(t,e)}}]),n}(Gi),Ji=Array.isArray,Zi=function(t,e){return na(t,e,[],0)},Qi=function(t){var e=U(oa(t,[[],[]]),2),n=e[0];return e[1].map((function(t,e){return t instanceof Gi?Gi.new(t.field.clone(n[e]),t):t instanceof qe?Gi.new(n[e],t):Gi.new(n[e],[])}))},Xi=function(t){return oa(t,[[],[]])},ta=function(t,e){return ra(t,e,[],0)},ea=function(t,e){return ia(t,e,[],0)};function na(t,e,n,r){for(var i,a=r,o=-1,u=e.length;++o0&&void 0!==arguments[0]?arguments[0]:[],n=arguments.length>1?arguments[1]:void 0,r=arguments.length>2?arguments[2]:void 0;F(this,e),this.fields=t||[],this.metadata=n||new Map,r||(r=fa(t)),this.dictionaries=r}return 
E(e,[{key:Symbol.toStringTag,get:function(){return"Schema"}},{key:"toString",value:function(){return"Schema<{ ".concat(this.fields.map((function(t,e){return"".concat(e,": ").concat(t)})).join(", ")," }>")}},{key:"compareTo",value:function(t){return Ln.compareSchemas(this,t)}},{key:"select",value:function(){for(var t=arguments.length,n=new Array(t),r=0;r2&&void 0!==arguments[2]&&arguments[2],i=arguments.length>3?arguments[3]:void 0;F(this,e),this.name=t,this.type=n,this.nullable=r,this.metadata=i||new Map}return E(e,[{key:"typeId",get:function(){return this.type.typeId}},{key:Symbol.toStringTag,get:function(){return"Field"}},{key:"toString",value:function(){return"".concat(this.name,": ").concat(this.type)}},{key:"compareTo",value:function(t){return Ln.compareField(this,t)}},{key:"clone",value:function(){for(var t,n,r,i,a,o,u,s,c,f,l=arguments.length,h=new Array(l),y=0;y1&&void 0!==arguments[1]?arguments[1]:new Map,n=-1,r=t.length;++n0&&fa(a.children,e)}return e}ua.prototype.fields=null,ua.prototype.metadata=null,ua.prototype.dictionaries=null,sa.prototype.type=null,sa.prototype.name=null,sa.prototype.nullable=null,sa.prototype.metadata=null;var la=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._run=new xi,r._offsets=new wr,r}return E(n,[{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"0";if(this.numChildren>0)throw new Error("ListBuilder can only have one child.");return this.children[this.numChildren]=t,this.type=new rr(new sa(e,t.type,!0)),this.numChildren-1}},{key:"clear",value:function(){return this._run.clear(),ze(ut(n.prototype),"clear",this).call(this)}},{key:"_flushPending",value:function(t){var e,n,r=this._run,i=this._offsets,a=this._setValue,o=0,u=O(t);try{for(u.s();!(n=u.n()).done;){var s=U(n.value,2);o=s[0],void 0===(e=s[1])?i.set(o,0):(i.set(o,e.length),a(this,o,r.bind(e)))}}catch(c){u.e(c)}finally{u.f()}}}]),n}(xr),ha=function(t){ot(n,t);var e=yt(n);function n(){var t;return F(this,n),(t=e.apply(this,arguments))._run=new xi,t}return E(n,[{key:"setValue",value:function(t,e){ze(ut(n.prototype),"setValue",this).call(this,t,this._run.bind(e))}},{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"0";if(this.numChildren>0)throw new Error("FixedSizeListBuilder can only have one child.");var n=this.children.push(t);return this.type=new ur(this.type.listSize,new sa(e,t.type,!0)),n}},{key:"clear",value:function(){return this._run.clear(),ze(ut(n.prototype),"clear",this).call(this)}}]),n}(Ir),ya=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"set",value:function(t,e){return ze(ut(n.prototype),"set",this).call(this,t,e)}},{key:"setValue",value:function(t,e){e=e instanceof Map?e:new Map(Object.entries(e));var n=this._pending||(this._pending=new Map),r=n.get(t);r&&(this._pendingLength-=r.size),this._pendingLength+=e.size,n.set(t,e)}},{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"".concat(this.numChildren);if(this.numChildren>0)throw new Error("ListBuilder can only have one child.");return this.children[this.numChildren]=t,this.type=new sr(new sa(e,t.type,!0),this.type.keysSorted),this.numChildren-1}},{key:"_flushPending",value:function(t){var e=this,n=this._offsets,r=this._setValue;t.forEach((function(t,i){void 0===t?n.set(i,0):(n.set(i,t.size),r(e,i,t))}))}}]),n}(xr),pa=function(t){ot(n,t);var e=yt(n);function n(){return 
F(this,n),e.apply(this,arguments)}return E(n,[{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"".concat(this.numChildren),n=this.children.push(t);return this.type=new ir([].concat(vn(this.type.children),[new sa(e,t.type,!0)])),n}}]),n}(Ir),da=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._typeIds=new mr(new Int8Array(0),1),"function"===typeof t.valueToChildTypeId&&(r._valueToChildTypeId=t.valueToChildTypeId),r}return E(n,[{key:"typeIdToChildIndex",get:function(){return this.type.typeIdToChildIndex}},{key:"append",value:function(t,e){return this.set(this.length,t,e)}},{key:"set",value:function(t,e,n){return void 0===n&&(n=this._valueToChildTypeId(this,e,t)),this.setValid(t,this.isValid(e))&&this.setValue(t,e,n),this}},{key:"setValue",value:function(t,e,r){this._typeIds.set(t,r),ze(ut(n.prototype),"setValue",this).call(this,t,e)}},{key:"addChild",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"".concat(this.children.length),n=this.children.push(t),r=this.type,i=r.children,a=r.mode,o=r.typeIds,u=[].concat(vn(i),[new sa(e,t.type)]);return this.type=new ar(a,[].concat(vn(o),[n]),u),n}},{key:"_valueToChildTypeId",value:function(t,e,n){throw new Error("Cannot map UnionBuilder value to child typeId. Pass the `childTypeId` as the second argument to unionBuilder.append(), or supply a `valueToChildTypeId` function as part of the UnionBuilder constructor options.")}}]),n}(Ir),va=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(da),ba=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._offsets=new mr(new Int32Array(0)),r}return E(n,[{key:"setValue",value:function(t,e,r){var i=this.type.typeIdToChildIndex[r];return this._offsets.set(t,this.getChildAt(i).length),ze(ut(n.prototype),"setValue",this).call(this,t,e,r)}}]),n}(da),ga=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(bn),ma=function(t,e,n){t[e]=n%4294967296|0,t[e+1]=n/4294967296|0},ka=function(t,e,n,r){var i=e[n],a=e[n+1];null!=i&&null!=a&&t.set(r.subarray(0,a-i),i)},wa=function(t,e,n){!function(t,e,n){t[e]=n/864e5|0}(t.values,e,n.valueOf())},_a=function(t,e,n){var r=t.values;ma(r,2*e,n.valueOf())},Ia=function(t,e,n){var r=t.stride;t.values[r*e]=n},Sa=function(t,e,n){var r=t.stride;t.values[r*e]=Cr(n)},xa=function(t,e,n){switch(typeof n){case"bigint":t.values64[e]=n;break;case"number":t.values[e*t.stride]=n;break;default:var r=n,i=t.stride,a=Ht(t.ArrayType,r);t.values.set(a.subarray(0,i),i*e)}},Aa=function(t,e,n){var r=t.values;return ma(r,2*e,n/1e3)},Ta=function(t,e,n){var r=t.values;return ma(r,2*e,n)},Ba=function(t,e,n){return function(t,e,n){t[e]=1e3*n%4294967296|0,t[e+1]=1e3*n/4294967296|0}(t.values,2*e,n)},Oa=function(t,e,n){return function(t,e,n){t[e]=1e6*n%4294967296|0,t[e+1]=1e6*n/4294967296|0}(t.values,2*e,n)},Da=function(t,e,n){t.values[t.stride*e]=n},La=function(t,e,n){t.values[t.stride*e]=n},Fa=function(t,e,n){t.values.set(n.subarray(0,2),2*e)},Ma=function(t,e,n){t.values.set(n.subarray(0,2),2*e)},Ea=function(t,e,n){var r=t.typeIdToChildIndex[t.typeIds[e]],i=t.getChildAt(r);i&&i.set(t.valueOffsets[e],n)},Ua=function(t,e,n){var r=t.typeIdToChildIndex[t.typeIds[e]],i=t.getChildAt(r);i&&i.set(e,n)},Na=function(t,e,n){t.values.set(n.subarray(0,2),2*e)},Ca=function(t,e,n){t.values[e]=12*n[0]+n[1]%12};ga.prototype.visitBool=function(t,e,n){var 
r=t.offset,i=t.values,a=r+e;n?i[a>>3]|=1<>3]&=~(1<0){var i=e.children||[],a={nullValues:e.nullValues},o=Array.isArray(i)?function(t,e){return i[e]||a}:function(t){var e=t.name;return i[e]||a};n.children.forEach((function(e,n){var i=e.type,a=o(e,n);r.children.push(t(Re(Re({},a),{},{type:i})))}))}return r},Object.keys(Je).map((function(t){return Je[t]})).filter((function(t){return"number"===typeof t&&t!==Je.NONE})).forEach((function(t){Pa.visit(t).prototype._setValue=ja.getVisitFn(t)})),Si.prototype._setValue=ja.visitBinary,function(t){!function(e){!function(e){!function(e){var n=function(){function e(){F(this,e),this.bb=null,this.bb_pos=0}return E(e,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"version",value:function(){var t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt16(this.bb_pos+t):Ye.apache.arrow.flatbuf.MetadataVersion.V1}},{key:"schema",value:function(t){var e=this.bb.__offset(this.bb_pos,6);return e?(t||new Ye.apache.arrow.flatbuf.Schema).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}},{key:"dictionaries",value:function(e,n){var r=this.bb.__offset(this.bb_pos,8);return r?(n||new t.apache.arrow.flatbuf.Block).__init(this.bb.__vector(this.bb_pos+r)+24*e,this.bb):null}},{key:"dictionariesLength",value:function(){var t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}},{key:"recordBatches",value:function(e,n){var r=this.bb.__offset(this.bb_pos,10);return r?(n||new t.apache.arrow.flatbuf.Block).__init(this.bb.__vector(this.bb_pos+r)+24*e,this.bb):null}},{key:"recordBatchesLength",value:function(){var t=this.bb.__offset(this.bb_pos,10);return t?this.bb.__vector_len(this.bb_pos+t):0}}],[{key:"getRootAsFooter",value:function(t,n){return(n||new e).__init(t.readInt32(t.position())+t.position(),t)}},{key:"startFooter",value:function(t){t.startObject(4)}},{key:"addVersion",value:function(t,e){t.addFieldInt16(0,e,Ye.apache.arrow.flatbuf.MetadataVersion.V1)}},{key:"addSchema",value:function(t,e){t.addFieldOffset(1,e,0)}},{key:"addDictionaries",value:function(t,e){t.addFieldOffset(2,e,0)}},{key:"startDictionariesVector",value:function(t,e){t.startVector(24,e,8)}},{key:"addRecordBatches",value:function(t,e){t.addFieldOffset(3,e,0)}},{key:"startRecordBatchesVector",value:function(t,e){t.startVector(24,e,8)}},{key:"endFooter",value:function(t){return t.endObject()}},{key:"finishFooterBuffer",value:function(t,e){t.finish(e)}},{key:"createFooter",value:function(t,n,r,i,a){return e.startFooter(t),e.addVersion(t,n),e.addSchema(t,r),e.addDictionaries(t,i),e.addRecordBatches(t,a),e.endFooter(t)}}]),e}();e.Footer=n}(e.flatbuf||(e.flatbuf={}))}(e.arrow||(e.arrow={}))}(t.apache||(t.apache={}))}(Va||(Va={})),function(t){!function(t){!function(t){!function(t){var e=function(){function t(){F(this,t),this.bb=null,this.bb_pos=0}return E(t,[{key:"__init",value:function(t,e){return this.bb_pos=t,this.bb=e,this}},{key:"offset",value:function(){return this.bb.readInt64(this.bb_pos)}},{key:"metaDataLength",value:function(){return this.bb.readInt32(this.bb_pos+8)}},{key:"bodyLength",value:function(){return this.bb.readInt64(this.bb_pos+16)}}],[{key:"createBlock",value:function(t,e,n,r){return t.prep(8,24),t.writeInt64(r),t.pad(4),t.writeInt32(n),t.writeInt64(e),t.offset()}}]),t}();t.Block=e}(t.flatbuf||(t.flatbuf={}))}(t.arrow||(t.arrow={}))}(t.apache||(t.apache={}))}(Va||(Va={}));var za=W.Long,Ya=W.Builder,Wa=W.ByteBuffer,Ha=Va.apache.arrow.flatbuf.Block,$a=Va.apache.arrow.flatbuf.Footer,Ka=function(){function t(e){var 
n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:an.V4,r=arguments.length>2?arguments[2]:void 0,i=arguments.length>3?arguments[3]:void 0;F(this,t),this.schema=e,this.version=n,r&&(this._recordBatches=r),i&&(this._dictionaryBatches=i)}return E(t,[{key:"numRecordBatches",get:function(){return this._recordBatches.length}},{key:"numDictionaries",get:function(){return this._dictionaryBatches.length}},{key:"recordBatches",value:R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:n=-1,r=this.numRecordBatches;case 1:if(!(++n=0&&t=0&&t=0&&t=0&&t0)return ze(ut(n.prototype),"write",this).call(this,t)}},{key:"toString",value:function(){var t=arguments.length>0&&void 0!==arguments[0]&&arguments[0];return t?rt(this.toUint8Array(!0)):this.toUint8Array(!1).then(rt)}},{key:"toUint8Array",value:function(){var t=this,e=arguments.length>0&&void 0!==arguments[0]&&arguments[0];return e?Wt(this._values)[0]:L(R.mark((function e(){var n,r,i,a,o,u,s,c;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:n=[],r=0,i=!1,a=!1,e.prev=3,u=P(t);case 5:return e.next=7,u.next();case 7:if(!(i=!(s=e.sent).done)){e.next=14;break}c=s.value,n.push(c),r+=c.byteLength;case 11:i=!1,e.next=5;break;case 14:e.next=20;break;case 16:e.prev=16,e.t0=e.catch(3),a=!0,o=e.t0;case 20:if(e.prev=20,e.prev=21,!i||null==u.return){e.next=25;break}return e.next=25,u.return();case 25:if(e.prev=25,!a){e.next=28;break}throw o;case 28:return e.finish(25);case 29:return e.finish(20);case 30:return e.abrupt("return",Wt(n,r)[0]);case 31:case"end":return e.stop()}}),e,null,[[3,16,20,30],[21,,25,29]])})))()}}]),n}(bt),Za=function(t){function e(t){F(this,e),t&&(this.source=new Xa(Be.fromIterable(t)))}return E(e,[{key:Symbol.iterator,value:function(){return this}},{key:"next",value:function(t){return this.source.next(t)}},{key:"throw",value:function(t){return this.source.throw(t)}},{key:"return",value:function(t){return this.source.return(t)}},{key:"peek",value:function(t){return this.source.peek(t)}},{key:"read",value:function(t){return this.source.read(t)}}]),e}(),Qa=function(t){function e(t){F(this,e),t instanceof e?this.source=t.source:t instanceof Ja?this.source=new to(Be.fromAsyncIterable(t)):jt(t)?this.source=new to(Be.fromNodeStream(t)):Ct(t)?this.source=new to(Be.fromDOMStream(t)):Ut(t)?this.source=new to(Be.fromDOMStream(t.body)):Dt(t)?this.source=new to(Be.fromIterable(t)):(Ot(t)||Lt(t))&&(this.source=new to(Be.fromAsyncIterable(t)))}return E(e,[{key:Symbol.asyncIterator,value:function(){return this}},{key:"next",value:function(t){return this.source.next(t)}},{key:"throw",value:function(t){return this.source.throw(t)}},{key:"return",value:function(t){return this.source.return(t)}},{key:"closed",get:function(){return this.source.closed}},{key:"cancel",value:function(t){return this.source.cancel(t)}},{key:"peek",value:function(t){return this.source.peek(t)}},{key:"read",value:function(t){return this.source.read(t)}}]),e}(),Xa=function(){function t(e){F(this,t),this.source=e}return E(t,[{key:"cancel",value:function(t){this.return(t)}},{key:"peek",value:function(t){return this.next(t,"peek").value}},{key:"read",value:function(t){return this.next(t,"read").value}},{key:"next",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"read";return this.source.next({cmd:e,size:t})}},{key:"throw",value:function(t){return Object.create(this.source.throw&&this.source.throw(t)||pt)}},{key:"return",value:function(t){return 
Object.create(this.source.return&&this.source.return(t)||pt)}}]),t}(),to=function(){function t(e){var n=this;F(this,t),this.source=e,this._closedPromise=new Promise((function(t){return n._closedPromiseResolve=t}))}return E(t,[{key:"cancel",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.return(e);case 2:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"closed",get:function(){return this._closedPromise}},{key:"read",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"read");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"peek",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.next(e,"peek");case 2:return t.abrupt("return",t.sent.value);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"next",value:function(){var t=L(R.mark((function t(e){var n,r=arguments;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return n=r.length>1&&void 0!==r[1]?r[1]:"read",t.next=3,this.source.next({cmd:n,size:e});case 3:return t.abrupt("return",t.sent);case 4:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"throw",value:function(){var t=L(R.mark((function t(e){var n;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(t.t1=this.source.throw,!t.t1){t.next=5;break}return t.next=4,this.source.throw(e);case 4:t.t1=t.sent;case 5:if(t.t0=t.t1,t.t0){t.next=8;break}t.t0=pt;case 8:return n=t.t0,this._closedPromiseResolve&&this._closedPromiseResolve(),this._closedPromiseResolve=void 0,t.abrupt("return",Object.create(n));case 12:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"return",value:function(){var t=L(R.mark((function t(e){var n;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(t.t1=this.source.return,!t.t1){t.next=5;break}return t.next=4,this.source.return(e);case 4:t.t1=t.sent;case 5:if(t.t0=t.t1,t.t0){t.next=8;break}t.t0=pt;case 8:return n=t.t0,this._closedPromiseResolve&&this._closedPromiseResolve(),this._closedPromiseResolve=void 0,t.abrupt("return",Object.create(n));case 12:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()}]),t}(),eo=function(t){ot(n,t);var e=yt(n);function n(t,r){var i;return F(this,n),(i=e.call(this)).position=0,i.buffer=Jt(t),i.size="undefined"===typeof r?i.buffer.byteLength:r,i}return E(n,[{key:"readInt32",value:function(t){var e=this.readAt(t,4),n=e.buffer,r=e.byteOffset;return new DataView(n,r).getInt32(0,!0)}},{key:"seek",value:function(t){return this.position=Math.min(t,this.size),t>>16,65535&this.buffer[1],this.buffer[0]>>>16,65535&this.buffer[0]]),n=new Uint32Array([t.buffer[1]>>>16,65535&t.buffer[1],t.buffer[0]>>>16,65535&t.buffer[0]]),r=e[3]*n[3];this.buffer[0]=65535&r;var i=r>>>16;return i+=r=e[2]*n[3],i+=r=e[3]*n[2]>>>0,this.buffer[0]+=i<<16,this.buffer[1]=i>>>0>>16,this.buffer[1]+=e[1]*n[3]+e[2]*n[2]+e[3]*n[1],this.buffer[1]+=e[0]*n[3]+e[1]*n[2]+e[2]*n[1]+e[3]*n[0]<<16,this}},{key:"_plus",value:function(t){var 
e=this.buffer[0]+t.buffer[0]>>>0;this.buffer[1]+=t.buffer[1],e>>0&&++this.buffer[1],this.buffer[0]=e}},{key:"lessThan",value:function(t){return this.buffer[1]1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString("string"===typeof t?t:t.toString(),e)}},{key:"fromNumber",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString(t.toString(),e)}},{key:"fromString",value:function(t){for(var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2),r=t.length,i=new n(e),a=0;a1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString("string"===typeof t?t:t.toString(),e)}},{key:"fromNumber",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2);return n.fromString(t.toString(),e)}},{key:"fromString",value:function(t){for(var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(2),r=t.startsWith("-"),i=t.length,a=new n(e),o=r?1:0;o>>0,e[2]=this.buffer[2]+t.buffer[2]>>>0,e[1]=this.buffer[1]+t.buffer[1]>>>0,e[0]=this.buffer[0]+t.buffer[0]>>>0,e[0]>>0&&++e[1],e[1]>>0&&++e[2],e[2]>>0&&++e[3],this.buffer[3]=e[3],this.buffer[2]=e[2],this.buffer[1]=e[1],this.buffer[0]=e[0],this}},{key:"hex",value:function(){return"".concat(ro(this.buffer[3])," ").concat(ro(this.buffer[2])," ").concat(ro(this.buffer[1])," ").concat(ro(this.buffer[0]))}}],[{key:"multiply",value:function(e,n){return new t(new Uint32Array(e.buffer)).times(n)}},{key:"add",value:function(e,n){return new t(new Uint32Array(e.buffer)).plus(n)}},{key:"from",value:function(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(4);return t.fromString("string"===typeof e?e:e.toString(),n)}},{key:"fromNumber",value:function(e){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(4);return t.fromString(e.toString(),n)}},{key:"fromString",value:function(e){for(var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Uint32Array(4),r=e.startsWith("-"),i=e.length,a=new t(n),o=r?1:0;o1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length;return yr.Null(t,0,n)}},{key:"visitBool",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Bool(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitInt",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Int(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitFloat",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Float(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitUtf8",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Utf8(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.readData(t))}},{key:"visitBinary",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Binary(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.readData(t))}},{key:"visitFixedSizeBinary",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.FixedSizeBinary(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitDate",value:function(t){var 
e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Date(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitTimestamp",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Timestamp(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitTime",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Time(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitDecimal",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Decimal(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitList",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.List(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.visit(t.children[0]))}},{key:"visitStruct",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Struct(t,0,n,r,this.readNullBitmap(t,r),this.visitMany(t.children))}},{key:"visitUnion",value:function(t){return t.mode===en.Sparse?this.visitSparseUnion(t):this.visitDenseUnion(t)}},{key:"visitDenseUnion",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Union(t,0,n,r,this.readNullBitmap(t,r),this.readTypeIds(t),this.readOffsets(t),this.visitMany(t.children))}},{key:"visitSparseUnion",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Union(t,0,n,r,this.readNullBitmap(t,r),this.readTypeIds(t),this.visitMany(t.children))}},{key:"visitDictionary",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Dictionary(t,0,n,r,this.readNullBitmap(t,r),this.readData(t.indices),this.readDictionary(t))}},{key:"visitInterval",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Interval(t,0,n,r,this.readNullBitmap(t,r),this.readData(t))}},{key:"visitFixedSizeList",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.FixedSizeList(t,0,n,r,this.readNullBitmap(t,r),this.visit(t.children[0]))}},{key:"visitMap",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextFieldNode(),n=e.length,r=e.nullCount;return yr.Map(t,0,n,r,this.readNullBitmap(t,r),this.readOffsets(t),this.visit(t.children[0]))}},{key:"nextFieldNode",value:function(){return this.nodes[++this.nodesIndex]}},{key:"nextBufferRange",value:function(){return this.buffers[++this.buffersIndex]}},{key:"readNullBitmap",value:function(t,e){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:this.nextBufferRange();return e>0&&this.readData(t,n)||new Uint8Array(0)}},{key:"readOffsets",value:function(t,e){return this.readData(t,e)}},{key:"readTypeIds",value:function(t,e){return this.readData(t,e)}},{key:"readData",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.length,r=e.offset;return this.bytes.subarray(r,r+n)}},{key:"readDictionary",value:function(t){return 
this.dictionaries.get(t.id)}}]),n}(bn),fo=function(t){ot(n,t);var e=yt(n);function n(t,r,i,a){var o;return F(this,n),(o=e.call(this,new Uint8Array(0),r,i,a)).sources=t,o}return E(n,[{key:"readNullBitmap",value:function(t,e){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:this.nextBufferRange(),r=n.offset;return e<=0?new Uint8Array(0):ln(this.sources[r])}},{key:"readOffsets",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.offset;return Ht(Uint8Array,Ht(Int32Array,this.sources[n]))}},{key:"readTypeIds",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.offset;return Ht(Uint8Array,Ht(t.ArrayType,this.sources[n]))}},{key:"readData",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.nextBufferRange(),n=e.offset,r=this.sources;return Fn.isTimestamp(t)||(Fn.isInt(t)||Fn.isTime(t))&&64===t.bitWidth||Fn.isDate(t)&&t.unit===Qe.MILLISECOND?Ht(Uint8Array,uo.convertArray(r[n])):Fn.isDecimal(t)?Ht(Uint8Array,so.convertArray(r[n])):Fn.isBinary(t)||Fn.isFixedSizeBinary(t)?lo(r[n]):Fn.isBool(t)?ln(r[n]):Fn.isUtf8(t)?it(r[n].join("")):Ht(Uint8Array,Ht(t.ArrayType,r[n].map((function(t){return+t}))))}}]),n}(co);function lo(t){for(var e=t.join(""),n=new Uint8Array(e.length/2),r=0;r>1]=parseInt(e.substr(r,2),16);return n}var ho=W.Long,yo=Ye.apache.arrow.flatbuf.Null,po=Ye.apache.arrow.flatbuf.Int,vo=Ye.apache.arrow.flatbuf.FloatingPoint,bo=Ye.apache.arrow.flatbuf.Binary,go=Ye.apache.arrow.flatbuf.Bool,mo=Ye.apache.arrow.flatbuf.Utf8,ko=Ye.apache.arrow.flatbuf.Decimal,wo=Ye.apache.arrow.flatbuf.Date,_o=Ye.apache.arrow.flatbuf.Time,Io=Ye.apache.arrow.flatbuf.Timestamp,So=Ye.apache.arrow.flatbuf.Interval,xo=Ye.apache.arrow.flatbuf.List,Ao=Ye.apache.arrow.flatbuf.Struct_,To=Ye.apache.arrow.flatbuf.Union,Bo=Ye.apache.arrow.flatbuf.DictionaryEncoding,Oo=Ye.apache.arrow.flatbuf.FixedSizeBinary,Do=Ye.apache.arrow.flatbuf.FixedSizeList,Lo=Ye.apache.arrow.flatbuf.Map,Fo=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"visit",value:function(t,e){return null==t||null==e?void 0:ze(ut(n.prototype),"visit",this).call(this,t,e)}},{key:"visitNull",value:function(t,e){return yo.startNull(e),yo.endNull(e)}},{key:"visitInt",value:function(t,e){return po.startInt(e),po.addBitWidth(e,t.bitWidth),po.addIsSigned(e,t.isSigned),po.endInt(e)}},{key:"visitFloat",value:function(t,e){return vo.startFloatingPoint(e),vo.addPrecision(e,t.precision),vo.endFloatingPoint(e)}},{key:"visitBinary",value:function(t,e){return bo.startBinary(e),bo.endBinary(e)}},{key:"visitBool",value:function(t,e){return go.startBool(e),go.endBool(e)}},{key:"visitUtf8",value:function(t,e){return mo.startUtf8(e),mo.endUtf8(e)}},{key:"visitDecimal",value:function(t,e){return ko.startDecimal(e),ko.addScale(e,t.scale),ko.addPrecision(e,t.precision),ko.endDecimal(e)}},{key:"visitDate",value:function(t,e){return wo.startDate(e),wo.addUnit(e,t.unit),wo.endDate(e)}},{key:"visitTime",value:function(t,e){return _o.startTime(e),_o.addUnit(e,t.unit),_o.addBitWidth(e,t.bitWidth),_o.endTime(e)}},{key:"visitTimestamp",value:function(t,e){var n=t.timezone&&e.createString(t.timezone)||void 0;return Io.startTimestamp(e),Io.addUnit(e,t.unit),void 0!==n&&Io.addTimezone(e,n),Io.endTimestamp(e)}},{key:"visitInterval",value:function(t,e){return So.startInterval(e),So.addUnit(e,t.unit),So.endInterval(e)}},{key:"visitList",value:function(t,e){return 
xo.startList(e),xo.endList(e)}},{key:"visitStruct",value:function(t,e){return Ao.startStruct_(e),Ao.endStruct_(e)}},{key:"visitUnion",value:function(t,e){To.startTypeIdsVector(e,t.typeIds.length);var n=To.createTypeIdsVector(e,t.typeIds);return To.startUnion(e),To.addMode(e,t.mode),To.addTypeIds(e,n),To.endUnion(e)}},{key:"visitDictionary",value:function(t,e){var n=this.visit(t.indices,e);return Bo.startDictionaryEncoding(e),Bo.addId(e,new ho(t.id,0)),Bo.addIsOrdered(e,t.isOrdered),void 0!==n&&Bo.addIndexType(e,n),Bo.endDictionaryEncoding(e)}},{key:"visitFixedSizeBinary",value:function(t,e){return Oo.startFixedSizeBinary(e),Oo.addByteWidth(e,t.byteWidth),Oo.endFixedSizeBinary(e)}},{key:"visitFixedSizeList",value:function(t,e){return Do.startFixedSizeList(e),Do.addListSize(e,t.listSize),Do.endFixedSizeList(e)}},{key:"visitMap",value:function(t,e){return Lo.startMap(e),Lo.addKeysSorted(e,t.keysSorted),Lo.endMap(e)}}]),n}(bn),Mo=new Fo;function Eo(t){return new nu(t.count,Co(t.columns),Vo(t.columns))}function Uo(t,e){return(t.fields||[]).filter(Boolean).map((function(t){return sa.fromJSON(t,e)}))}function No(t,e){return(t.children||[]).filter(Boolean).map((function(t){return sa.fromJSON(t,e)}))}function Co(t){return(t||[]).reduce((function(t,e){return[].concat(vn(t),[new au(e.count,(n=e.VALIDITY,(n||[]).reduce((function(t,e){return t+ +(0===e)}),0)))],vn(Co(e.children)));var n}),[])}function Vo(t){for(var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:[],n=-1,r=(t||[]).length;++n1&&void 0!==arguments[1]?arguments[1]:0;if(e instanceof ua)return new t(0,an.V4,rn.Schema,e);if(e instanceof nu)return new t(n,an.V4,rn.RecordBatch,e);if(e instanceof ru)return new t(n,an.V4,rn.DictionaryBatch,e);throw new Error("Unrecognized Message header: ".concat(e))}}]),t}(),nu=function(){function t(e,n,r){F(this,t),this._nodes=n,this._buffers=r,this._length="number"===typeof e?e:e.low}return E(t,[{key:"nodes",get:function(){return this._nodes}},{key:"length",get:function(){return this._length}},{key:"buffers",get:function(){return this._buffers}}]),t}(),ru=function(){function t(e,n){var r=arguments.length>2&&void 0!==arguments[2]&&arguments[2];F(this,t),this._data=e,this._isDelta=r,this._id="number"===typeof n?n:n.low}return E(t,[{key:"id",get:function(){return this._id}},{key:"data",get:function(){return this._data}},{key:"isDelta",get:function(){return this._isDelta}},{key:"length",get:function(){return this.data.length}},{key:"nodes",get:function(){return this.data.nodes}},{key:"buffers",get:function(){return this.data.buffers}}]),t}(),iu=E((function t(e,n){F(this,t),this.offset="number"===typeof e?e:e.low,this.length="number"===typeof n?n:n.low})),au=E((function t(e,n){F(this,t),this.length="number"===typeof e?e:e.low,this.nullCount="number"===typeof n?n:n.low}));function ou(t){for(var e,n=[],r=-1,i=-1,a=t.nodesLength();++r0?$o.createCustomMetadataVector(t,vn(e.metadata).map((function(e){var n=U(e,2),r=n[0],i=n[1],a=t.createString("".concat(r)),o=t.createString("".concat(i));return Jo.startKeyValue(t),Jo.addKey(t,a),Jo.addValue(t,o),Jo.endKeyValue(t)}))):-1;e.name&&(n=t.createString(e.name));$o.startField(t),$o.addType(t,r),$o.addTypeType(t,o),$o.addChildren(t,s),$o.addNullable(t,!!e.nullable),-1!==n&&$o.addName(t,n);-1!==i&&$o.addDictionary(t,i);-1!==c&&$o.addCustomMetadata(t,c);return $o.endField(t)},sa.decode=function(t,e){var n,r,i,a,o,u;e&&(u=t.dictionary())?e.has(n=u.id().low)?(a=(a=u.indexType())?lu(a):new Cn,o=new lr(e.get(n),a,n,u.isOrdered()),r=new 
sa(t.name(),o,t.nullable(),fu(t))):(a=(a=u.indexType())?lu(a):new Cn,e.set(n,i=hu(t,cu(t,e))),o=new lr(i,a,n,u.isOrdered()),r=new sa(t.name(),o,t.nullable(),fu(t))):(i=hu(t,cu(t,e)),r=new sa(t.name(),i,t.nullable(),fu(t)));return r||null},sa.fromJSON=function(t,e){var n,r,i,a,o,u;return e&&(a=t.dictionary)?e.has(n=a.id)?(r=(r=a.indexType)?Ro(r):new Cn,u=new lr(e.get(n),r,n,a.isOrdered),i=new sa(t.name,u,t.nullable,jo(t.customMetadata))):(r=(r=a.indexType)?Ro(r):new Cn,e.set(n,o=Po(t,No(t,e))),u=new lr(o,r,n,a.isOrdered),i=new sa(t.name,u,t.nullable,jo(t.customMetadata))):(o=Po(t,No(t,e)),i=new sa(t.name,o,t.nullable,jo(t.customMetadata))),i||null},ua.encode=function(t,e){var n=e.fields.map((function(e){return sa.encode(t,e)}));Ko.startFieldsVector(t,n.length);var r=Ko.createFieldsVector(t,n),i=e.metadata&&e.metadata.size>0?Ko.createCustomMetadataVector(t,vn(e.metadata).map((function(e){var n=U(e,2),r=n[0],i=n[1],a=t.createString("".concat(r)),o=t.createString("".concat(i));return Jo.startKeyValue(t),Jo.addKey(t,a),Jo.addValue(t,o),Jo.endKeyValue(t)}))):-1;Ko.startSchema(t),Ko.addFields(t,r),Ko.addEndianness(t,yu?Qo.Little:Qo.Big),-1!==i&&Ko.addCustomMetadata(t,i);return Ko.endSchema(t)},ua.decode=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Map,n=su(t,e);return new ua(n,fu(t),e)},ua.fromJSON=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:new Map;return new ua(Uo(t,e),jo(t.customMetadata),e)},nu.encode=function(t,e){var n=e.nodes||[],r=e.buffers||[];Xo.startNodesVector(t,n.length),n.slice().reverse().forEach((function(e){return au.encode(t,e)}));var i=t.endVector();Xo.startBuffersVector(t,r.length),r.slice().reverse().forEach((function(e){return iu.encode(t,e)}));var a=t.endVector();return Xo.startRecordBatch(t),Xo.addLength(t,new zo(e.length,0)),Xo.addNodes(t,i),Xo.addBuffers(t,a),Xo.endRecordBatch(t)},nu.decode=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:an.V4;return new nu(t.length(),ou(t),uu(t,e))},nu.fromJSON=Eo,ru.encode=function(t,e){var n=nu.encode(t,e.data);return tu.startDictionaryBatch(t),tu.addId(t,new zo(e.id,0)),tu.addIsDelta(t,e.isDelta),tu.addData(t,n),tu.endDictionaryBatch(t)},ru.decode=function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:an.V4;return new ru(nu.decode(t.data(),e),t.id(),t.isDelta())},ru.fromJSON=function(t){return new ru(Eo(t.data),t.id,t.isDelta)},au.encode=function(t,e){return Zo.createFieldNode(t,new zo(e.length,0),new zo(e.nullCount,0))},au.decode=function(t){return new au(t.length(),t.nullCount())},iu.encode=function(t,e){return Go.createBuffer(t,new zo(e.offset,0),new zo(e.length,0))},iu.decode=function(t){return new iu(t.offset(),t.length())};for(var yu=function(){var t=new ArrayBuffer(2);return new DataView(t).setInt16(0,256,!0),256===new Int16Array(t)[0]}(),pu=W.ByteBuffer,du=function(t){return"Expected ".concat(rn[t]," Message in stream, but was null or length 0.")},vu=function(t){return"Header pointer of flatbuffer-encoded ".concat(rn[t]," Message is null or length 0.")},bu=function(t,e){return"Expected to read ".concat(t," metadata bytes, but only read ").concat(e,".")},gu=function(t,e){return"Expected to read ".concat(t," bytes for message body, but only read ").concat(e,".")},mu=function(t){function e(t){F(this,e),this.source=t instanceof Za?t:new Za(t)}return E(e,[{key:Symbol.iterator,value:function(){return this}},{key:"next",value:function(){var 
t;return(t=this.readMetadataLength()).done||-1===t.value&&(t=this.readMetadataLength()).done||(t=this.readMetadata(t.value)).done?pt:t}},{key:"throw",value:function(t){return this.source.throw(t)}},{key:"return",value:function(t){return this.source.return(t)}},{key:"readMessage",value:function(t){var e;if((e=this.next()).done)return null;if(null!=t&&e.value.headerType!==t)throw new Error(du(t));return e.value}},{key:"readMessageBody",value:function(t){if(t<=0)return new Uint8Array(0);var e=Jt(this.source.read(t));if(e.byteLength0&&void 0!==arguments[0]&&arguments[0],e=rn.Schema,n=this.readMessage(e),r=n&&n.header();if(t&&!r)throw new Error(vu(e));return r}},{key:"readMetadataLength",value:function(){var t=this.source.read(_u),e=t&&new pu(t),n=e&&e.readInt32(0)||0;return{done:0===n,value:n}}},{key:"readMetadata",value:function(t){var e=this.source.read(t);if(!e)return pt;if(e.byteLength0&&void 0!==a[0]&&a[0],n=rn.Schema,t.next=4,this.readMessage(n);case 4:if(r=t.sent,i=r&&r.header(),!e||i){t.next=8;break}throw new Error(vu(n));case 8:return t.abrupt("return",i);case 9:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"readMetadataLength",value:function(){var t=L(R.mark((function t(){var e,n,r;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.source.read(_u);case 2:return e=t.sent,n=e&&new pu(e),r=n&&n.readInt32(0)||0,t.abrupt("return",{done:0===r,value:r});case 6:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"readMetadata",value:function(){var t=L(R.mark((function t(e){var n;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this.source.read(e);case 2:if(n=t.sent){t.next=5;break}return t.abrupt("return",pt);case 5:if(!(n.byteLength1&&void 0!==arguments[1]?arguments[1]:0,n=-1,r=Su.length;++n2147483647)throw new RangeError("Cannot write arrays larger than 2^31 - 1 in length");Fn.isNull(t.type)||Lu.call(this,i<=0?new Uint8Array(0):fn(e.offset,r,e.nullBitmap)),this.nodes.push(new au(r,i))}return ze(ut(n.prototype),"visit",this).call(this,t)}},{key:"visitNull",value:function(t){return this}},{key:"visitDictionary",value:function(t){return this.visit(t.indices)}},{key:"nodes",get:function(){return this._nodes}},{key:"buffers",get:function(){return this._buffers}},{key:"byteLength",get:function(){return this._byteLength}},{key:"bufferRegions",get:function(){return this._bufferRegions}}],[{key:"assemble",value:function(){for(var t=new n,e=arguments.length,r=new Array(e),i=0;i=t.length?Lu.call(this,new Uint8Array(0)):(e=t.values)instanceof Uint8Array?Lu.call(this,fn(t.offset,t.length,e)):Lu.call(this,ln(t))},Du.prototype.visitInt=Fu,Du.prototype.visitFloat=Fu,Du.prototype.visitUtf8=Mu,Du.prototype.visitBinary=Mu,Du.prototype.visitFixedSizeBinary=Fu,Du.prototype.visitDate=Fu,Du.prototype.visitTimestamp=Fu,Du.prototype.visitTime=Fu,Du.prototype.visitDecimal=Fu,Du.prototype.visitList=Eu,Du.prototype.visitStruct=Uu,Du.prototype.visitUnion=function(t){var e=t.type,n=t.length,r=t.typeIds,i=t.valueOffsets;if(Lu.call(this,r),e.mode===en.Sparse)return Uu.call(this,t);if(e.mode===en.Dense){if(t.offset<=0)return Lu.call(this,i),Uu.call(this,t);for(var a,o,u=r.reduce((function(t,e){return Math.max(t,e)}),r[0]),s=new Int32Array(u+1),c=new Int32Array(u+1).fill(-1),f=new Int32Array(n),l=xe(-i[0],n,i),h=-1;++h0&&void 0!==arguments[0]&&arguments[0];return this._sink.toString(t)}},{key:"toUint8Array",value:function(){var t=arguments.length>0&&void 
0!==arguments[0]&&arguments[0];return this._sink.toUint8Array(t)}},{key:"writeAll",value:function(t){var e=this;return Ot(t)?t.then((function(t){return e.writeAll(t)})):Lt(t)?Ru(this,t):ju(this,t)}},{key:"closed",get:function(){return this._sink.closed}},{key:e,value:function(){return this._sink[Symbol.asyncIterator]()}},{key:"toDOMStream",value:function(t){return this._sink.toDOMStream(t)}},{key:"toNodeStream",value:function(t){return this._sink.toNodeStream(t)}},{key:"close",value:function(){return this.reset()._sink.close()}},{key:"abort",value:function(t){return this.reset()._sink.abort(t)}},{key:"finish",value:function(){return this._autoDestroy?this.close():this.reset(this._sink,this._schema),this}},{key:"reset",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:this._sink,e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:null;return t===this._sink||t instanceof Ja?this._sink=t:(this._sink=new Ja,t&&Nt(t)?this.toDOMStream({type:"bytes"}).pipeTo(t):t&&Vt(t)&&this.toNodeStream({objectMode:!1}).pipe(t)),this._started&&this._schema&&this._writeFooter(this._schema),this._started=!1,this._dictionaryBlocks=[],this._recordBatchBlocks=[],this._dictionaryDeltaOffsets=new Map,e&&e.compareTo(this._schema)||(null===e?(this._position=0,this._schema=null):(this._started=!0,this._schema=e,this._writeSchema(e))),this}},{key:"write",value:function(t){var e=null;if(!this._sink)throw new Error("RecordBatchWriter is closed");if(null===t||void 0===t)return this.finish()&&void 0;if(t instanceof Ec&&!(e=t.schema))return this.finish()&&void 0;if(t instanceof Uc&&!(e=t.schema))return this.finish()&&void 0;if(e&&!e.compareTo(this._schema)){if(this._started&&this._autoDestroy)return this.close();this.reset(this._sink,e)}t instanceof Uc?t instanceof Nc||this._writeRecordBatch(t):t instanceof Ec?this.writeAll(t.chunks):Dt(t)&&this.writeAll(t)}},{key:"_writeMessage",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:8,n=e-1,r=eu.encode(t),i=r.byteLength,a=this._writeLegacyIpcFormat?4:8,o=i+a+n&~n,u=o-i-a;return t.headerType===rn.RecordBatch?this._recordBatchBlocks.push(new qa(o,t.bodyLength,this._position)):t.headerType===rn.DictionaryBatch&&this._dictionaryBlocks.push(new qa(o,t.bodyLength,this._position)),this._writeLegacyIpcFormat||this._write(Int32Array.of(-1)),this._write(Int32Array.of(o-a)),i>0&&this._write(r),this._writePadding(u)}},{key:"_write",value:function(t){if(this._started){var e=Jt(t);e&&e.byteLength>0&&(this._sink.write(e),this._position+=e.byteLength)}return this}},{key:"_writeSchema",value:function(t){return this._writeMessage(eu.from(t))}},{key:"_writeFooter",value:function(t){return this._writeLegacyIpcFormat?this._write(Int32Array.of(0)):this._write(Int32Array.of(-1,0))}},{key:"_writeMagic",value:function(){return this._write(Su)}},{key:"_writePadding",value:function(t){return t>0?this._write(new Uint8Array(t)):this}},{key:"_writeRecordBatch",value:function(t){var e=Du.assemble(t),n=e.byteLength,r=e.nodes,i=e.bufferRegions,a=e.buffers,o=new nu(t.length,r,i),u=eu.from(o,n);return this._writeDictionaries(t)._writeMessage(u)._writeBodyBuffers(a)}},{key:"_writeDictionaryBatch",value:function(t,e){var n=arguments.length>2&&void 0!==arguments[2]&&arguments[2];this._dictionaryDeltaOffsets.set(e,t.length+(this._dictionaryDeltaOffsets.get(e)||0));var r=Du.assemble(t),i=r.byteLength,a=r.nodes,o=r.bufferRegions,u=r.buffers,s=new nu(t.length,a,o),c=new ru(s,e,n),f=eu.from(c,i);return 
this._writeMessage(f)._writeBodyBuffers(u)}},{key:"_writeBodyBuffers",value:function(t){for(var e,n,r,i=-1,a=t.length;++i0&&(this._write(e),(r=(n+7&-8)-n)>0&&this._writePadding(r));return this}},{key:"_writeDictionaries",value:function(t){var e,n=O(t.dictionaries);try{for(n.s();!(e=n.n()).done;){var r=U(e.value,2),i=r[0],a=r[1],o=this._dictionaryDeltaOffsets.get(i)||0;if(0===o||(a=a.slice(o)).length>0){var u,s=O("chunks"in a?a.chunks:[a]);try{for(s.s();!(u=s.n()).done;){var c=u.value;this._writeDictionaryBatch(c,i,o>0),o+=c.length}}catch(f){s.e(f)}finally{s.f()}}}}catch(f){n.e(f)}finally{n.f()}return this}}],[{key:"throughNode",value:function(t){throw new Error('"throughNode" not available in this environment')}},{key:"throughDOM",value:function(t,e){throw new Error('"throughDOM" not available in this environment')}}]),r}(vt,Symbol.asyncIterator),Cu=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,null,[{key:"writeAll",value:function(t,e){var r=new n(e);return Ot(t)?t.then((function(t){return r.writeAll(t)})):Lt(t)?Ru(r,t):ju(r,t)}}]),n}(Nu),Vu=function(t){ot(n,t);var e=yt(n);function n(){var t;return F(this,n),(t=e.call(this))._autoDestroy=!0,t}return E(n,[{key:"_writeSchema",value:function(t){return this._writeMagic()._writePadding(2)}},{key:"_writeFooter",value:function(t){var e=Ka.encode(new Ka(t,an.V4,this._recordBatchBlocks,this._dictionaryBlocks));return ze(ut(n.prototype),"_writeFooter",this).call(this,t)._write(e)._write(Int32Array.of(e.byteLength))._writeMagic()}}],[{key:"writeAll",value:function(t){var e=new n;return Ot(t)?t.then((function(t){return e.writeAll(t)})):Lt(t)?Ru(e,t):ju(e,t)}}]),n}(Nu);function ju(t,e){var n=e;e instanceof Ec&&(n=e.chunks,t.reset(void 0,e.schema));var r,i=O(n);try{for(i.s();!(r=i.n()).done;){var a=r.value;t.write(a)}}catch(o){i.e(o)}finally{i.f()}return t.finish()}function Ru(t,e){return Pu.apply(this,arguments)}function Pu(){return(Pu=L(R.mark((function t(e,n){var r,i,a,o,u,s;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:r=!1,i=!1,t.prev=2,o=P(n);case 4:return t.next=6,o.next();case 6:if(!(r=!(u=t.sent).done)){t.next=12;break}s=u.value,e.write(s);case 9:r=!1,t.next=4;break;case 12:t.next=18;break;case 14:t.prev=14,t.t0=t.catch(2),i=!0,a=t.t0;case 18:if(t.prev=18,t.prev=19,!r||null==o.return){t.next=23;break}return t.next=23,o.return();case 23:if(t.prev=23,!i){t.next=26;break}throw a;case 26:return t.finish(23);case 27:return t.finish(18);case 28:return t.abrupt("return",e.finish());case 29:case"end":return t.stop()}}),t,null,[[2,14,18,28],[19,,23,27]])})))).apply(this,arguments)}var zu=new Uint8Array(0),Yu=function(t){return[zu,zu,new Uint8Array(t),zu]};function Wu(t,e){for(var n,r,i=arguments.length>2&&void 0!==arguments[2]?arguments[2]:e.reduce((function(t,e){return Math.max(t,e.length)}),0),a=-1,o=e.length,u=vn(t.fields),s=[],c=(i+63&-64)>>3;++a0;){for(u=Number.POSITIVE_INFINITY,s=-1;++s0&&(i[o++]=[u,f.slice()]))}return[t=new ua(r,t.metadata),i.map((function(e){return zr(Uc,[t].concat(vn(e)))}))]}(t,e.map((function(t){return t instanceof Wi?t.chunks.map((function(t){return t.data})):[t.data]})))}function Ku(t,e,n,r,i){for(var a,o,u=0,s=-1,c=r.length,f=(e+63&-64)>>3;++s=e?u===e?n[s]=a:(n[s]=a.slice(0,e),a=a.slice(e,u-e),i.numBatches=Math.max(i.numBatches,r[s].unshift(a))):((o=t[s]).nullable||(t[s]=o.clone({nullable:!0})),n[s]=a?a._changeLengthAndBackfillNullBitmap(e):yr.new(o.type,0,e,e,Yu(f)));return n}function Gu(t,e){if(null==t)return{};var 
n,r,i=function(t,e){if(null==t)return{};var n,r,i={},a=Object.keys(t);for(r=0;r=0||(i[n]=t[n]);return i}(t,e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(t);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(t,n)&&(i[n]=t[n])}return i}var qu=function(t,e){ot(r,t);var n=yt(r);function r(t,e){var i;return F(this,r),(i=n.call(this))._children=e,i.numChildren=t.childData.length,i._bindDataAccessors(i.data=t),i}return E(r,[{key:"type",get:function(){return this.data.type}},{key:"typeId",get:function(){return this.data.typeId}},{key:"length",get:function(){return this.data.length}},{key:"offset",get:function(){return this.data.offset}},{key:"stride",get:function(){return this.data.stride}},{key:"nullCount",get:function(){return this.data.nullCount}},{key:"byteLength",get:function(){return this.data.byteLength}},{key:"VectorName",get:function(){return"".concat(Je[this.typeId],"Vector")}},{key:"ArrayType",get:function(){return this.type.ArrayType}},{key:"values",get:function(){return this.data.values}},{key:"typeIds",get:function(){return this.data.typeIds}},{key:"nullBitmap",get:function(){return this.data.nullBitmap}},{key:"valueOffsets",get:function(){return this.data.valueOffsets}},{key:e,get:function(){return"".concat(this.VectorName,"<").concat(this.type[Symbol.toStringTag],">")}},{key:"clone",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this._children;return qe.new(t,e)}},{key:"concat",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n0){var e=this.offset+t;return 0!==(this.nullBitmap[e>>3]&1<=this.numChildren?null:(this._children||(this._children=[]))[t]||(this._children[t]=qe.new(this.data.childData[t]))}},{key:"toJSON",value:function(){return vn(this)}},{key:"_sliceInternal",value:function(t,e,n){return t.clone(t.data.slice(e,n-e),null)}},{key:"_bindDataAccessors",value:function(t){}}]),r}(qe,Symbol.toStringTag);qu.prototype[Symbol.isConcatSpreadable]=!0;var Ju=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"asUtf8",value:function(){return qe.new(this.data.clone(new Gn))}}]),n}(qu),Zu=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,null,[{key:"from",value:function(t){return Mc((function(){return new qn}),t)}}]),n}(qu),Qu=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,null,[{key:"from",value:function(){for(var t=arguments.length,e=new Array(t),n=0;n>>0)},Js=function(t){return new Date(t)},Zs=function(t,e,n){var r=e[n],i=e[n+1];return null!=r&&null!=i?t.subarray(r,i):null},Qs=function(t,e){return function(t,e){return Js(function(t,e){return 864e5*t[e]}(t,e))}(t.values,e)},Xs=function(t,e){return function(t,e){return Js(qs(t,e))}(t.values,2*e)},tc=function(t,e){var n=t.stride;return t.values[n*e]},ec=function(t,e){var n=t.stride;return Nr(t.values[n*e])},nc=function(t,e){var n=t.stride,r=t.values,i=t.type;return Xr.new(r.subarray(n*e,n*(e+1)),i.isSigned)},rc=function(t,e){var n=t.values;return 1e3*qs(n,2*e)},ic=function(t,e){var n=t.values;return qs(n,2*e)},ac=function(t,e){return function(t,e){return t[e+1]/1e3*4294967296+(t[e]>>>0)/1e3}(t.values,2*e)},oc=function(t,e){return function(t,e){return t[e+1]/1e6*4294967296+(t[e]>>>0)/1e6}(t.values,2*e)},uc=function(t,e){return t.values[t.stride*e]},sc=function(t,e){return t.values[t.stride*e]},cc=function(t,e){var n=t.values;return Xr.signed(n.subarray(2*e,2*(e+1)))},fc=function(t,e){var n=t.values;return 
Xr.signed(n.subarray(2*e,2*(e+1)))},lc=function(t,e){var n=t.typeIdToChildIndex[t.typeIds[e]],r=t.getChildAt(n);return r?r.get(t.valueOffsets[e]):null},hc=function(t,e){var n=t.typeIdToChildIndex[t.typeIds[e]],r=t.getChildAt(n);return r?r.get(e):null},yc=function(t,e){return t.values.subarray(2*e,2*(e+1))},pc=function(t,e){var n=t.values[e],r=new Int32Array(2);return r[0]=n/12|0,r[1]=n%12|0,r};Gs.prototype.visitNull=function(t,e){return null},Gs.prototype.visitBool=function(t,e){var n=t.offset+e;return 0!==(t.values[n>>3]&1<0?0:-1},vc.prototype.visitBool=bc,vc.prototype.visitInt=bc,vc.prototype.visitInt8=bc,vc.prototype.visitInt16=bc,vc.prototype.visitInt32=bc,vc.prototype.visitInt64=bc,vc.prototype.visitUint8=bc,vc.prototype.visitUint16=bc,vc.prototype.visitUint32=bc,vc.prototype.visitUint64=bc,vc.prototype.visitFloat=bc,vc.prototype.visitFloat16=bc,vc.prototype.visitFloat32=bc,vc.prototype.visitFloat64=bc,vc.prototype.visitUtf8=bc,vc.prototype.visitBinary=bc,vc.prototype.visitFixedSizeBinary=bc,vc.prototype.visitDate=bc,vc.prototype.visitDateDay=bc,vc.prototype.visitDateMillisecond=bc,vc.prototype.visitTimestamp=bc,vc.prototype.visitTimestampSecond=bc,vc.prototype.visitTimestampMillisecond=bc,vc.prototype.visitTimestampMicrosecond=bc,vc.prototype.visitTimestampNanosecond=bc,vc.prototype.visitTime=bc,vc.prototype.visitTimeSecond=bc,vc.prototype.visitTimeMillisecond=bc,vc.prototype.visitTimeMicrosecond=bc,vc.prototype.visitTimeNanosecond=bc,vc.prototype.visitDecimal=bc,vc.prototype.visitList=bc,vc.prototype.visitStruct=bc,vc.prototype.visitUnion=bc,vc.prototype.visitDenseUnion=gc,vc.prototype.visitSparseUnion=gc,vc.prototype.visitDictionary=bc,vc.prototype.visitInterval=bc,vc.prototype.visitIntervalDayTime=bc,vc.prototype.visitIntervalYearMonth=bc,vc.prototype.visitFixedSizeList=bc,vc.prototype.visitMap=bc;var mc=new vc,kc=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n)}(bn);function wc(t){if(t.nullCount>0)return function(t){var e=dc.getVisitFn(t);return hn(t.nullBitmap,t.offset,t.length,t,(function(t,n,r,i){return 0!==(r&1<0)?t.values.subarray(0,r)[Symbol.iterator]():R.mark((function e(n){var i;return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:i=-1;case 1:if(!(++i1?e-1:0),r=1;r0&&(this.get=(e=this.get,function(t){return this.isValid(t)?e.call(this,t):null}),this.set=function(t){return function(e,n){cn(this.nullBitmap,this.offset+e,!(null===n||void 0===n))&&t.call(this,e,n)}}(this.set));var e},Object.keys(Je).map((function(t){return Je[t]})).filter((function(t){return"number"===typeof t})).filter((function(t){return t!==Je.NONE})).forEach((function(t){var e,n=Lc.visit(t);n.prototype.get=(e=dc.getVisitFn(t),function(t){return e(this,t)}),n.prototype.set=Ks(ja.getVisitFn(t)),n.prototype.indexOf=Ks(mc.getVisitFn(t)),n.prototype.toArray=$s(xc.getVisitFn(t)),n.prototype.getByteWidth=function(t){return function(){return t(this.type)}}(Oc.getVisitFn(t)),n.prototype[Symbol.iterator]=$s(_c.getVisitFn(t))}));var Ec=function(t){ot(n,t);var e=yt(n);function n(){var t;F(this,n);for(var r=null,i=arguments.length,a=new Array(i),o=0;o0&&void 0!==arguments[0]?arguments[0]:this._chunks;return new n(this._schema,t)}},{key:"getColumn",value:function(t){return this.getColumnAt(this.getColumnIndex(t))}},{key:"getColumnAt",value:function(t){return this.getChildAt(t)}},{key:"getColumnIndex",value:function(t){return this._schema.fields.findIndex((function(e){return e.name===t}))}},{key:"getChildAt",value:function(t){if(t<0||t>=this.numChildren)return 
null;var e,n,r=this._schema.fields,i=this._children||(this._children=[]);if(n=i[t])return n;if(e=r[t]){var a=this._chunks.map((function(e){return e.getChildAt(t)})).filter((function(t){return null!=t}));if(a.length>0)return i[t]=new Gi(e,a)}return null}},{key:"serialize",value:function(){var t=!(arguments.length>1&&void 0!==arguments[1])||arguments[1],e=t?Cu:Vu;return e.writeAll(this).toUint8Array(!0)}},{key:"count",value:function(){return this._length}},{key:"select",value:function(){for(var t=this._schema.fields.reduce((function(t,e,n){return t.set(e.name,n)}),new Map),e=arguments.length,n=new Array(e),r=0;r-1}))))}},{key:"selectAt",value:function(){for(var t,e=arguments.length,r=new Array(e),i=0;i3&&void 0!==arguments[3]?arguments[3]:u[r];return void 0===a?e.getColumnAt(r):t.getColumnAt(a)}))),vn(o.map((function(e){return t.getColumnAt(e)})))).filter(Boolean);return zr(n,vn($u(s,c)))}}],[{key:"empty",value:function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:new ua([]);return new n(t,[])}},{key:"from",value:function(t){if(!t)return n.empty();if("object"===typeof t){var e=Dt(t.values)?function(t){if(t.type instanceof ir)return Ec.fromStruct(Ls.from(t));return null}(t):Lt(t.values)?function(t){if(t.type instanceof ir)return Ls.from(t).then((function(t){return Ec.fromStruct(t)}));return null}(t):null;if(null!==e)return e}var r=jc.from(t);return Ot(r)?L(R.mark((function t(){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.t0=n,t.next=3,r;case 3:return t.t1=t.sent,t.next=6,t.t0.from.call(t.t0,t.t1);case 6:return t.abrupt("return",t.sent);case 7:case"end":return t.stop()}}),t)})))():r.isSync()&&(r=r.open())?r.schema?new n(r.schema,vn(r)):n.empty():function(){var t=L(R.mark((function t(e){var r,i,a,o,u,s,c,f,l;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,e;case 2:if(r=t.sent,i=r.schema,a=[],!i){t.next=35;break}o=!1,u=!1,t.prev=8,c=P(r);case 10:return t.next=12,c.next();case 12:if(!(o=!(f=t.sent).done)){t.next=18;break}l=f.value,a.push(l);case 15:o=!1,t.next=10;break;case 18:t.next=24;break;case 20:t.prev=20,t.t0=t.catch(8),u=!0,s=t.t0;case 24:if(t.prev=24,t.prev=25,!o||null==c.return){t.next=29;break}return t.next=29,c.return();case 29:if(t.prev=29,!u){t.next=32;break}throw s;case 32:return t.finish(29);case 33:return t.finish(24);case 34:return t.abrupt("return",new n(i,a));case 35:return t.abrupt("return",n.empty());case 36:case"end":return t.stop()}}),t,null,[[8,20,24,34],[25,,29,33]])})));return function(e){return t.apply(this,arguments)}}()(r.open())}},{key:"fromAsync",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,n.from(e);case 2:return t.abrupt("return",t.sent);case 3:case"end":return t.stop()}}),t)})));return function(e){return t.apply(this,arguments)}}()},{key:"fromStruct",value:function(t){return n.new(t.data.childData,t.type.children)}},{key:"new",value:function(){for(var t=arguments.length,e=new Array(t),r=0;r1&&void 0!==arguments[1]?arguments[1]:this._children;return new n(this._schema,t,e)}},{key:"concat",value:function(){for(var t=arguments.length,e=new Array(t),r=0;r-1}))))}},{key:"selectAt",value:function(){for(var t,e=this,r=arguments.length,i=new Array(r),a=0;a0&&this.dictionaries.set(e.id,n),this}}],[{key:"collect",value:function(t){return(new n).visit(t.data,new ir(t.schema.fields)).dictionaries}}]),n}(bn),Vc=R.mark(Zc),jc=function(t,e,n){ot(i,t);var r=yt(i);function i(t){var e;return 
F(this,i),(e=r.call(this))._impl=t,e}return E(i,[{key:"closed",get:function(){return this._impl.closed}},{key:"schema",get:function(){return this._impl.schema}},{key:"autoDestroy",get:function(){return this._impl.autoDestroy}},{key:"dictionaries",get:function(){return this._impl.dictionaries}},{key:"numDictionaries",get:function(){return this._impl.numDictionaries}},{key:"numRecordBatches",get:function(){return this._impl.numRecordBatches}},{key:"footer",get:function(){return this._impl.isFile()?this._impl.footer:null}},{key:"isSync",value:function(){return this._impl.isSync()}},{key:"isAsync",value:function(){return this._impl.isAsync()}},{key:"isFile",value:function(){return this._impl.isFile()}},{key:"isStream",value:function(){return this._impl.isStream()}},{key:"next",value:function(){return this._impl.next()}},{key:"throw",value:function(t){return this._impl.throw(t)}},{key:"return",value:function(t){return this._impl.return(t)}},{key:"cancel",value:function(){return this._impl.cancel()}},{key:"reset",value:function(t){return this._impl.reset(t),this._DOMStream=void 0,this._nodeStream=void 0,this}},{key:"open",value:function(t){var e=this,n=this._impl.open(t);return Ot(n)?n.then((function(){return e})):this}},{key:"readRecordBatch",value:function(t){return this._impl.isFile()?this._impl.readRecordBatch(t):null}},{key:e,value:function(){return this._impl[Symbol.iterator]()}},{key:n,value:function(){return this._impl[Symbol.asyncIterator]()}},{key:"toDOMStream",value:function(){var t=this;return Be.toDOMStream(this.isSync()?Ve({},Symbol.iterator,(function(){return t})):Ve({},Symbol.asyncIterator,(function(){return t})))}},{key:"toNodeStream",value:function(){var t=this;return Be.toNodeStream(this.isSync()?Ve({},Symbol.iterator,(function(){return t})):Ve({},Symbol.asyncIterator,(function(){return t})),{objectMode:!0})}}],[{key:"throughNode",value:function(t){throw new Error('"throughNode" not available in this environment')}},{key:"throughDOM",value:function(t,e){throw new Error('"throughDOM" not available in this environment')}},{key:"from",value:function(t){return t instanceof i?t:Ft(t)?function(t){return new Rc(new qc(t))}(t):Et(t)?function(t){return ef.apply(this,arguments)}(t):Ot(t)?L(R.mark((function e(){return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.t0=i,e.next=3,t;case 3:return e.t1=e.sent,e.next=6,e.t0.from.call(e.t0,e.t1);case 6:return e.abrupt("return",e.sent);case 7:case"end":return e.stop()}}),e)})))():Ut(t)||Ct(t)||jt(t)||Lt(t)?function(t){return tf.apply(this,arguments)}(new Qa(t)):function(t){var e=t.peek(Tu+7&-8);return e&&e.byteLength>=4?Au(e)?new zc(new Kc(t.read())):new Rc(new Hc(t)):new Rc(new Hc(R.mark((function t(){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:case"end":return t.stop()}}),t)}))()))}(new Za(t))}},{key:"readAll",value:function(t){return t instanceof i?t.isSync()?Zc(t):Qc(t):Ft(t)||ArrayBuffer.isView(t)||Dt(t)||Mt(t)?Zc(t):Qc(t)}}]),i}(vt,Symbol.iterator,Symbol.asyncIterator),Rc=function(t,e,n){ot(i,t);var r=yt(i);function i(t){var e;return F(this,i),(e=r.call(this,t))._impl=t,e}return E(i,[{key:e,value:function(){return this._impl[Symbol.iterator]()}},{key:n,value:function(){var t=this;return j(R.mark((function e(){return R.wrap((function(e){for(;;)switch(e.prev=e.next){case 0:return e.delegateYield(Y(P(t[Symbol.iterator]()),C),"t0",1);case 1:case"end":return e.stop()}}),e)})))()}}]),i}(jc,Symbol.iterator,Symbol.asyncIterator),Pc=function(t,e,n){ot(i,t);var r=yt(i);function i(t){var e;return 
F(this,i),(e=r.call(this,t))._impl=t,e}return E(i,[{key:e,value:function(){throw new Error("AsyncRecordBatchStreamReader is not Iterable")}},{key:n,value:function(){return this._impl[Symbol.asyncIterator]()}}]),i}(jc,Symbol.iterator,Symbol.asyncIterator),zc=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._impl=t,r}return E(n)}(Rc),Yc=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this,t))._impl=t,r}return E(n)}(Pc),Wc=function(){function t(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:new Map;F(this,t),this.closed=!1,this.autoDestroy=!0,this._dictionaryIndex=0,this._recordBatchIndex=0,this.dictionaries=e}return E(t,[{key:"numDictionaries",get:function(){return this._dictionaryIndex}},{key:"numRecordBatches",get:function(){return this._recordBatchIndex}},{key:"isSync",value:function(){return!1}},{key:"isAsync",value:function(){return!1}},{key:"isFile",value:function(){return!1}},{key:"isStream",value:function(){return!1}},{key:"reset",value:function(t){return this._dictionaryIndex=0,this._recordBatchIndex=0,this.schema=t,this.dictionaries=new Map,this}},{key:"_loadRecordBatch",value:function(t,e){return new Uc(this.schema,t.length,this._loadVectors(t,e,this.schema.fields))}},{key:"_loadDictionaryBatch",value:function(t,e){var n=t.id,r=t.isDelta,i=t.data,a=this.dictionaries,o=this.schema,u=a.get(n);if(r||!u){var s=o.dictionaries.get(n);return u&&r?u.concat(qe.new(this._loadVectors(i,e,[s])[0])):qe.new(this._loadVectors(i,e,[s])[0])}return u}},{key:"_loadVectors",value:function(t,e,n){return new co(e,t.nodes,t.buffers,this.dictionaries).visitMany(n)}}]),t}(),Hc=function(t,e){ot(r,t);var n=yt(r);function r(t,e){var i;return F(this,r),(i=n.call(this,e))._reader=Ft(t)?new wu(i._handle=t):new mu(i._handle=t),i}return E(r,[{key:"isSync",value:function(){return!0}},{key:"isStream",value:function(){return!0}},{key:e,value:function(){return this}},{key:"cancel",value:function(){!this.closed&&(this.closed=!0)&&(this.reset()._reader.return(),this._reader=null,this.dictionaries=null)}},{key:"open",value:function(t){return this.closed||(this.autoDestroy=Jc(this,t),this.schema||(this.schema=this._reader.readSchema())||this.cancel()),this}},{key:"throw",value:function(t){return!this.closed&&this.autoDestroy&&(this.closed=!0)?this.reset()._reader.throw(t):pt}},{key:"return",value:function(t){return!this.closed&&this.autoDestroy&&(this.closed=!0)?this.reset()._reader.return(t):pt}},{key:"next",value:function(){if(this.closed)return pt;for(var t,e=this._reader;t=this._readNextMessageAndValidate();)if(t.isSchema())this.reset(t.header());else{if(t.isRecordBatch()){this._recordBatchIndex++;var n=t.header(),r=e.readMessageBody(t.bodyLength);return{done:!1,value:this._loadRecordBatch(n,r)}}if(t.isDictionaryBatch()){this._dictionaryIndex++;var i=t.header(),a=e.readMessageBody(t.bodyLength),o=this._loadDictionaryBatch(i,a);this.dictionaries.set(i.id,o)}}return this.schema&&0===this._recordBatchIndex?(this._recordBatchIndex++,{done:!1,value:new Nc(this.schema)}):this.return()}},{key:"_readNextMessageAndValidate",value:function(t){return this._reader.readMessage(t)}}]),r}(Wc,Symbol.iterator),$c=function(t,e){ot(r,t);var n=yt(r);function r(t,e){var i;return F(this,r),(i=n.call(this,e))._reader=new ku(i._handle=t),i}return E(r,[{key:"isAsync",value:function(){return!0}},{key:"isStream",value:function(){return!0}},{key:e,value:function(){return this}},{key:"cancel",value:function(){var t=L(R.mark((function t(){return 
R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed||!(this.closed=!0)){t.next=5;break}return t.next=3,this.reset()._reader.return();case 3:this._reader=null,this.dictionaries=null;case 5:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"open",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed){t.next=10;break}if(this.autoDestroy=Jc(this,e),t.t0=this.schema,t.t0){t.next=7;break}return t.next=6,this._reader.readSchema();case 6:t.t0=this.schema=t.sent;case 7:if(t.t0){t.next=10;break}return t.next=10,this.cancel();case 10:return t.abrupt("return",this);case 11:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"throw",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed||!this.autoDestroy||!(this.closed=!0)){t.next=4;break}return t.next=3,this.reset()._reader.throw(e);case 3:return t.abrupt("return",t.sent);case 4:return t.abrupt("return",pt);case 5:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"return",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(this.closed||!this.autoDestroy||!(this.closed=!0)){t.next=4;break}return t.next=3,this.reset()._reader.return(e);case 3:return t.abrupt("return",t.sent);case 4:return t.abrupt("return",pt);case 5:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()},{key:"next",value:function(){var t=L(R.mark((function t(){var e,n,r,i,a,o,u,s;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:if(!this.closed){t.next=2;break}return t.abrupt("return",pt);case 2:n=this._reader;case 3:return t.next=5,this._readNextMessageAndValidate();case 5:if(!(e=t.sent)){t.next=31;break}if(!e.isSchema()){t.next=11;break}return t.next=9,this.reset(e.header());case 9:t.next=29;break;case 11:if(!e.isRecordBatch()){t.next=21;break}return this._recordBatchIndex++,r=e.header(),t.next=16,n.readMessageBody(e.bodyLength);case 16:return i=t.sent,a=this._loadRecordBatch(r,i),t.abrupt("return",{done:!1,value:a});case 21:if(!e.isDictionaryBatch()){t.next=29;break}return this._dictionaryIndex++,o=e.header(),t.next=26,n.readMessageBody(e.bodyLength);case 26:u=t.sent,s=this._loadDictionaryBatch(o,u),this.dictionaries.set(o.id,s);case 29:t.next=3;break;case 31:if(!this.schema||0!==this._recordBatchIndex){t.next=34;break}return this._recordBatchIndex++,t.abrupt("return",{done:!1,value:new Nc(this.schema)});case 34:return t.next=36,this.return();case 36:return t.abrupt("return",t.sent);case 37:case"end":return t.stop()}}),t,this)})));return function(){return t.apply(this,arguments)}}()},{key:"_readNextMessageAndValidate",value:function(){var t=L(R.mark((function t(e){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,this._reader.readMessage(e);case 2:return t.abrupt("return",t.sent);case 3:case"end":return t.stop()}}),t,this)})));return function(e){return t.apply(this,arguments)}}()}]),r}(Wc,Symbol.asyncIterator),Kc=function(t){ot(n,t);var e=yt(n);function n(t,r){return F(this,n),e.call(this,t instanceof eo?t:new eo(t),r)}return E(n,[{key:"footer",get:function(){return this._footer}},{key:"numDictionaries",get:function(){return this._footer?this._footer.numDictionaries:0}},{key:"numRecordBatches",get:function(){return 
this._footer?this._footer.numRecordBatches:0}},{key:"isSync",value:function(){return!0}},{key:"isFile",value:function(){return!0}},{key:"open",value:function(t){if(!this.closed&&!this._footer){this.schema=(this._footer=this._readFooter()).schema;var e,r=O(this._footer.dictionaryBatches());try{for(r.s();!(e=r.n()).done;){e.value&&this._readDictionaryBatch(this._dictionaryIndex++)}}catch(i){r.e(i)}finally{r.f()}}return ze(ut(n.prototype),"open",this).call(this,t)}},{key:"readRecordBatch",value:function(t){if(this.closed)return null;this._footer||this.open();var e=this._footer&&this._footer.getRecordBatch(t);if(e&&this._handle.seek(e.offset)){var n=this._reader.readMessage(rn.RecordBatch);if(n&&n.isRecordBatch()){var r=n.header(),i=this._reader.readMessageBody(n.bodyLength);return this._loadRecordBatch(r,i)}}return null}},{key:"_readDictionaryBatch",value:function(t){var e=this._footer&&this._footer.getDictionaryBatch(t);if(e&&this._handle.seek(e.offset)){var n=this._reader.readMessage(rn.DictionaryBatch);if(n&&n.isDictionaryBatch()){var r=n.header(),i=this._reader.readMessageBody(n.bodyLength),a=this._loadDictionaryBatch(r,i);this.dictionaries.set(r.id,a)}}}},{key:"_readFooter",value:function(){var t=this._handle,e=t.size-Bu,n=t.readInt32(e),r=t.readAt(e-n,n);return Ka.decode(r)}},{key:"_readNextMessageAndValidate",value:function(t){if(this._footer||this.open(),this._footer&&this._recordBatchIndex1?r-1:0),a=1;a=4)){t.next=18;break}if(Au(n)){t.next=8;break}t.t1=new Pc(new $c(e)),t.next=15;break;case 8:return t.t2=zc,t.t3=Kc,t.next=12,e.read();case 12:t.t4=t.sent,t.t5=new t.t3(t.t4),t.t1=new t.t2(t.t5);case 15:t.t0=t.t1,t.next=19;break;case 18:t.t0=new Pc(new $c(j(R.mark((function t(){return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:case"end":return t.stop()}}),t)})))()));case 19:return t.abrupt("return",t.t0);case 20:case"end":return t.stop()}}),t)})))).apply(this,arguments)}function ef(){return(ef=L(R.mark((function t(e){var n,r,i;return R.wrap((function(t){for(;;)switch(t.prev=t.next){case 0:return t.next=2,e.stat();case 2:if(n=t.sent,r=n.size,i=new no(e,r),!(r>=Ou)){t.next=12;break}return t.t0=Au,t.next=9,i.readAt(0,Tu+7&-8);case 9:if(t.t1=t.sent,!(0,t.t0)(t.t1)){t.next=12;break}return t.abrupt("return",new Yc(new Gc(i)));case 12:return t.abrupt("return",new Pc(new $c(i)));case 13:case"end":return t.stop()}}),t)})))).apply(this,arguments)}var nf=["readableStrategy","writableStrategy","queueingStrategy"];var rf=function(){function t(e){var n,r,i=this;F(this,t),this._numChunks=0,this._finished=!1,this._bufferedSize=0;var a=e.readableStrategy,o=e.writableStrategy,u=e.queueingStrategy,s=void 0===u?"count":u,c=Gu(e,nf);this._controller=null,this._builder=Ir.new(c),this._getSize="bytes"!==s?af:of;var f=Re({},a).highWaterMark,l=void 0===f?"bytes"===s?Math.pow(2,14):1e3:f,h=Re({},o).highWaterMark,y=void 0===h?"bytes"===s?Math.pow(2,14):1e3:h;this.readable=new ReadableStream((Ve(n={},"cancel",(function(){i._builder.clear()})),Ve(n,"pull",(function(t){i._maybeFlush(i._builder,i._controller=t)})),Ve(n,"start",(function(t){i._maybeFlush(i._builder,i._controller=t)})),n),{highWaterMark:l,size:"bytes"!==s?af:of}),this.writable=new WritableStream((Ve(r={},"abort",(function(){i._builder.clear()})),Ve(r,"write",(function(){i._maybeFlush(i._builder,i._controller)})),Ve(r,"close",(function(){i._maybeFlush(i._builder.finish(),i._controller)})),r),{highWaterMark:y,size:function(t){return i._writeValueAndReturnChunkSize(t)}})}return E(t,[{key:"_writeValueAndReturnChunkSize",value:function(t){var 
e=this._bufferedSize;return this._bufferedSize=this._getSize(this._builder.append(t)),this._bufferedSize-e}},{key:"_maybeFlush",value:function(t,e){null!==e&&(this._bufferedSize>=e.desiredSize&&++this._numChunks&&this._enqueue(e,t.toVector()),t.finished&&((t.length>0||0===this._numChunks)&&++this._numChunks&&this._enqueue(e,t.toVector()),!this._finished&&(this._finished=!0)&&this._enqueue(e,null)))}},{key:"_enqueue",value:function(t,e){this._bufferedSize=0,this._controller=null,null===e?t.close():t.enqueue(e)}}]),t}(),af=function(t){return t.length},of=function(t){return t.byteLength};var uf=function(){function t(){F(this,t)}return E(t,[{key:"eq",value:function(e){return e instanceof t||(e=new sf(e)),new df(this,e)}},{key:"le",value:function(e){return e instanceof t||(e=new sf(e)),new vf(this,e)}},{key:"ge",value:function(e){return e instanceof t||(e=new sf(e)),new bf(this,e)}},{key:"lt",value:function(t){return new gf(this.ge(t))}},{key:"gt",value:function(t){return new gf(this.le(t))}},{key:"ne",value:function(t){return new gf(this.eq(t))}}]),t}(),sf=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).v=t,r}return E(n)}(uf),cf=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).name=t,r}return E(n,[{key:"bind",value:function(t){if(!this.colidx){this.colidx=-1;for(var e=t.schema.fields,n=-1;++n=n.v;return function(){return r}}},{key:"_bindColCol",value:function(t,e,n){var r=e.bind(t),i=n.bind(t);return function(t,e){return r(t,e)>=i(t,e)}}},{key:"_bindColLit",value:function(t,e,n){var r=e.bind(t);return function(t,e){return r(t,e)>=n.v}}},{key:"_bindLitCol",value:function(t,e,n){var r=n.bind(t);return function(t,n){return e.v>=r(t,n)}}}]),n}(lf),gf=function(t){ot(n,t);var e=yt(n);function n(t){var r;return F(this,n),(r=e.call(this)).child=t,r}return E(n,[{key:"bind",value:function(t){var e=this.child.bind(t);return function(t,n){return!e(t,n)}}}]),n}(ff);Ec.prototype.countBy=function(t){return new mf(this.chunks).countBy(t)},Ec.prototype.scan=function(t,e){return new mf(this.chunks).scan(t,e)},Ec.prototype.scanReverse=function(t,e){return new mf(this.chunks).scanReverse(t,e)},Ec.prototype.filter=function(t){return new mf(this.chunks).filter(t)};var mf=function(t){ot(n,t);var e=yt(n);function n(){return F(this,n),e.apply(this,arguments)}return E(n,[{key:"filter",value:function(t){return new wf(this.chunks,t)}},{key:"scan",value:function(t,e){for(var n=this.chunks,r=n.length,i=-1;++i=0;){var i=n[r];e&&e(i);for(var a=i.length;--a>=0;)t(a,i)}}},{key:"countBy",value:function(t){var e=this.chunks,n=e.length,r="string"===typeof t?new cf(t):t;r.bind(e[n-1]);var i=r.vector;if(!Fn.isDictionary(i.type))throw new Error("countBy currently only supports dictionary-encoded columns");for(var a=Math.ceil(Math.log(i.length)/Math.log(256)),o=new(4==a?Uint32Array:a>=2?Uint16Array:Uint8Array)(i.dictionary.length),u=-1;++u=0;)for(var i=n[r],a=this._predicate.bind(i),o=!1,u=i.length;--u>=0;)a(u,i)&&(e&&!o&&(e(i),o=!0),t(u,i))}},{key:"count",value:function(){for(var t=0,e=this._chunks,n=e.length,r=-1;++r=2?Uint16Array:Uint8Array)(i.dictionary.length),u=-1;++u=i.headerRows&&e=i.headerColumns;if(n){var o=["blank"];return e>0&&o.push("level"+t),{type:"blank",classNames:o.join(" "),content:""}}if(a)return{type:"columns",classNames:(o=["col_heading","level"+t,"col"+(s=e-i.headerColumns)]).join(" 
"),content:i.getContent(i.columnsTable,s,t)};if(r){o=["row_heading","level"+e,"row"+(u=t-i.headerRows)];return{type:"index",id:"T_"+i.uuid+"level"+e+"_row"+u,classNames:o.join(" "),content:i.getContent(i.indexTable,u,e)}}o=["data","row"+(u=t-i.headerRows),"col"+(s=e-i.headerColumns)];var u,s,c=i.styler?i.getContent(i.styler.displayValuesTable,u,s):i.getContent(i.dataTable,u,s);return{type:"data",id:"T_"+i.uuid+"row"+u+"_col"+s,classNames:o.join(" "),content:c}},this.getContent=function(t,e,n){var r=t.getColumnAt(n);return null===r?"":i.getColumnTypeId(t,n)===Je.Timestamp?i.nanosToDate(r.get(e)):r.get(e)},this.dataTable=Ec.from(t),this.indexTable=Ec.from(e),this.columnsTable=Ec.from(n),this.styler=r?{caption:r.caption,displayValuesTable:Ec.from(r.displayValues),styles:r.styles,uuid:r.uuid}:void 0}return Object.defineProperty(t.prototype,"rows",{get:function(){return this.indexTable.length+this.columnsTable.numCols},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"columns",{get:function(){return this.indexTable.numCols+this.columnsTable.length},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"headerRows",{get:function(){return this.rows-this.dataRows},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"headerColumns",{get:function(){return this.columns-this.dataColumns},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"dataRows",{get:function(){return this.dataTable.length},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"dataColumns",{get:function(){return this.dataTable.numCols},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"uuid",{get:function(){return this.styler&&this.styler.uuid},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"caption",{get:function(){return this.styler&&this.styler.caption},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"styles",{get:function(){return this.styler&&this.styler.styles},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"table",{get:function(){return this.dataTable},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"index",{get:function(){return this.indexTable},enumerable:!0,configurable:!0}),Object.defineProperty(t.prototype,"columnTable",{get:function(){return this.columnsTable},enumerable:!0,configurable:!0}),t.prototype.serialize=function(){return{data:this.dataTable.serialize(),index:this.indexTable.serialize(),columns:this.columnsTable.serialize()}},t.prototype.getColumnTypeId=function(t,e){return t.schema.fields[e].type.typeId},t.prototype.nanosToDate=function(t){return new Date(t/1e6)},t}(),Sf=function(){return Sf=Object.assign||function(t){for(var e,n=1,r=arguments.length;n0?t.argsDataframeToObject(e.dfs):{};n=Sf(Sf({},n),r);var i=Boolean(e.disabled),a=e.theme;a&&Af(a);var o={disabled:i,args:n,theme:a},u=new CustomEvent(t.RENDER_EVENT,{detail:o});t.events.dispatchEvent(u)},t.argsDataframeToObject=function(e){var n=e.map((function(e){var n=e.key,r=e.value;return[n,t.toArrowTable(r)]}));return Object.fromEntries(n)},t.toArrowTable=function(t){var e=t.data,n=e.data,r=e.index,i=e.columns,a=e.styler;return new If(n,r,i,a)},t.sendBackMsg=function(t,e){window.parent.postMessage(Sf({isStreamlitMessage:!0,type:t},e),"*")},t}(),Af=function(t){var e=document.createElement("style");document.head.appendChild(e),e.innerHTML="\n :root {\n --primary-color: "+t.primaryColor+";\n --background-color: "+t.backgroundColor+";\n --secondary-background-color: "+t.secondaryBackgroundColor+";\n 
--text-color: "+t.textColor+";\n --font: "+t.font+";\n }\n\n body {\n background-color: var(--background-color);\n color: var(--text-color);\n }\n "};var Tf=function(){var t=function(e,n){return t=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var n in e)e.hasOwnProperty(n)&&(t[n]=e[n])},t(e,n)};return function(e,n){function r(){this.constructor=e}t(e,n),e.prototype=null===n?Object.create(n):(r.prototype=n.prototype,new r)}}();!function(t){function e(){return null!==t&&t.apply(this,arguments)||this}Tf(e,t),e.prototype.componentDidMount=function(){xf.setFrameHeight()},e.prototype.componentDidUpdate=function(){xf.setFrameHeight()}}(f.a.PureComponent)},function(t,e,n){"use strict";var r=n(6),i={childContextTypes:!0,contextType:!0,contextTypes:!0,defaultProps:!0,displayName:!0,getDefaultProps:!0,getDerivedStateFromError:!0,getDerivedStateFromProps:!0,mixins:!0,propTypes:!0,type:!0},a={name:!0,length:!0,prototype:!0,caller:!0,callee:!0,arguments:!0,arity:!0},o={$$typeof:!0,compare:!0,defaultProps:!0,displayName:!0,propTypes:!0,type:!0},u={};function s(t){return r.isMemo(t)?o:u[t.$$typeof]||i}u[r.ForwardRef]={$$typeof:!0,render:!0,defaultProps:!0,displayName:!0,propTypes:!0},u[r.Memo]=o;var c=Object.defineProperty,f=Object.getOwnPropertyNames,l=Object.getOwnPropertySymbols,h=Object.getOwnPropertyDescriptor,y=Object.getPrototypeOf,p=Object.prototype;t.exports=function t(e,n,r){if("string"!==typeof n){if(p){var i=y(n);i&&i!==p&&t(e,i,r)}var o=f(n);l&&(o=o.concat(l(n)));for(var u=s(e),d=s(n),v=0;vD.length&&D.push(t)}function M(t,e,n,r){var i=typeof t;"undefined"!==i&&"boolean"!==i||(t=null);var u=!1;if(null===t)u=!0;else switch(i){case"string":case"number":u=!0;break;case"object":switch(t.$$typeof){case a:case o:u=!0}}if(u)return n(r,t,""===e?"."+U(t,0):e),1;if(u=0,e=""===e?".":e+":",Array.isArray(t))for(var s=0;s=0;--a){var o=this.tryEntries[a],u=o.completion;if("root"===o.tryLoc)return i("end");if(o.tryLoc<=this.prev){var s=r.call(o,"catchLoc"),c=r.call(o,"finallyLoc");if(s&&c){if(this.prev=0;--n){var i=this.tryEntries[n];if(i.tryLoc<=this.prev&&r.call(i,"finallyLoc")&&this.prev=0;--e){var n=this.tryEntries[e];if(n.finallyLoc===t)return this.complete(n.completion,n.afterLoc),T(n),d}},catch:function(t){for(var e=this.tryEntries.length-1;e>=0;--e){var n=this.tryEntries[e];if(n.tryLoc===t){var r=n.completion;if("throw"===r.type){var i=r.arg;T(n)}return i}}throw new Error("illegal catch attempt")},delegateYield:function(t,n,r){return this.delegate={iterator:O(t),resultName:n,nextLoc:r},"next"===this.method&&(this.arg=e),d}},t}(t.exports);try{regeneratorRuntime=r}catch(i){"object"===typeof globalThis?globalThis.regeneratorRuntime=r:Function("r","regeneratorRuntime = r")(r)}}]]); -//# sourceMappingURL=2.422ca0c4.chunk.js.map \ No newline at end of file diff --git a/spaces/abcde1234www/ChatGPT-prompt-generator/README.md b/spaces/abcde1234www/ChatGPT-prompt-generator/README.md deleted file mode 100644 index 9765db2c80dd4c4b938060743922163b1718e003..0000000000000000000000000000000000000000 --- a/spaces/abcde1234www/ChatGPT-prompt-generator/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatGPT Prompt Generator -emoji: 👨🏻‍🎤 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: merve/ChatGPT-prompt-generator ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/abdvl/datahub_qa_bot/docs/authentication/guides/sso/configure-oidc-react.md b/spaces/abdvl/datahub_qa_bot/docs/authentication/guides/sso/configure-oidc-react.md deleted file mode 100644 index b7efb94f842d62be937abb3e873a0f0c3384f289..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/authentication/guides/sso/configure-oidc-react.md +++ /dev/null @@ -1,237 +0,0 @@ -# Overview - -The DataHub React application supports OIDC authentication built on top of the [Pac4j Play](https://github.com/pac4j/play-pac4j) library. -This enables operators of DataHub to integrate with 3rd party identity providers like Okta, Google, Keycloak, & more to authenticate their users. - -When configured, OIDC auth will be enabled between clients of the DataHub UI & `datahub-frontend` server. Beyond this point is considered -to be a secure environment and as such authentication is validated & enforced only at the "front door" inside datahub-frontend. - -:::caution -Even if OIDC is configured the root user can still login without OIDC by going -to `/login` URL endpoint. It is recommended that you don't use the default -credentials by mounting a different file in the front end container. To do this -please see [this guide](../jaas.md) to mount a custom user.props file for a JAAS authenticated deployment. -::: - -## Provider-Specific Guides - -1. [Configuring OIDC using Google](configure-oidc-react-google.md) -2. [Configuring OIDC using Okta](configure-oidc-react-okta.md) -3. [Configuring OIDC using Azure](configure-oidc-react-azure.md) - -## Configuring OIDC in React - -### 1. Register an app with your Identity Provider - -To configure OIDC in React, you will most often need to register yourself as a client with your identity provider (Google, Okta, etc). Each provider may -have their own instructions. Provided below are links to examples for Okta, Google, Azure AD, & Keycloak. - -- [Registering an App in Okta](https://developer.okta.com/docs/guides/add-an-external-idp/apple/register-app-in-okta/) -- [OpenID Connect in Google Identity](https://developers.google.com/identity/protocols/oauth2/openid-connect) -- [OpenID Connect authentication with Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/auth-oidc) -- [Keycloak - Securing Applications and Services Guide](https://www.keycloak.org/docs/latest/securing_apps/) - -During the registration process, you'll need to provide a login redirect URI to the identity provider. This tells the identity provider -where to redirect to once they've authenticated the end user. - -By default, the URL will be constructed as follows: - -> "http://your-datahub-domain.com/callback/oidc" - -For example, if you're hosted DataHub at `datahub.myorg.com`, this -value would be `http://datahub.myorg.com/callback/oidc`. For testing purposes you can also specify localhost as the domain name -directly: `http://localhost:9002/callback/oidc` - -The goal of this step should be to obtain the following values, which will need to be configured before deploying DataHub: - -1. **Client ID** - A unique identifier for your application with the identity provider -2. **Client Secret** - A shared secret to use for exchange between you and your identity provider -3. **Discovery URL** - A URL where the OIDC API of your identity provider can be discovered. This should suffixed by - `.well-known/openid-configuration`. 
Sometimes, identity providers will not explicitly include this URL in their setup guides, though - this endpoint *will* exist as per the OIDC specification. For more info see http://openid.net/specs/openid-connect-discovery-1_0.html. - - -### 2. Configure DataHub Frontend Server - -The second step to enabling OIDC involves configuring `datahub-frontend` to enable OIDC authentication with your Identity Provider. - -To do so, you must update the `datahub-frontend` [docker.env](../../../../docker/datahub-frontend/env/docker.env) file with the -values received from your identity provider: - -``` -# Required Configuration Values: -AUTH_OIDC_ENABLED=true -AUTH_OIDC_CLIENT_ID=your-client-id -AUTH_OIDC_CLIENT_SECRET=your-client-secret -AUTH_OIDC_DISCOVERY_URI=your-provider-discovery-url -AUTH_OIDC_BASE_URL=your-datahub-url -``` - -- `AUTH_OIDC_ENABLED`: Enable delegating authentication to OIDC identity provider -- `AUTH_OIDC_CLIENT_ID`: Unique client id received from identity provider -- `AUTH_OIDC_CLIENT_SECRET`: Unique client secret received from identity provider -- `AUTH_OIDC_DISCOVERY_URI`: Location of the identity provider OIDC discovery API. Suffixed with `.well-known/openid-configuration` -- `AUTH_OIDC_BASE_URL`: The base URL of your DataHub deployment, e.g. https://yourorgdatahub.com (prod) or http://localhost:9002 (testing) - -Providing these configs will cause DataHub to delegate authentication to your identity -provider, requesting the "oidc email profile" scopes and parsing the "preferred_username" claim from -the authenticated profile as the DataHub CorpUser identity. - - -> By default, the login callback endpoint exposed by DataHub will be located at `${AUTH_OIDC_BASE_URL}/callback/oidc`. This must **exactly** match the login redirect URL you've registered with your identity provider in step 1. - -In kubernetes, you can add the above env variables in the values.yaml as follows. - -```yaml -datahub-frontend: - ... - extraEnvs: - - name: AUTH_OIDC_ENABLED - value: "true" - - name: AUTH_OIDC_CLIENT_ID - value: your-client-id - - name: AUTH_OIDC_CLIENT_SECRET - value: your-client-secret - - name: AUTH_OIDC_DISCOVERY_URI - value: your-provider-discovery-url - - name: AUTH_OIDC_BASE_URL - value: your-datahub-url -``` - -You can also package OIDC client secrets into a k8s secret by running - -```kubectl create secret generic datahub-oidc-secret --from-literal=secret=<>``` - -Then set the secret env as follows. - -```yaml - - name: AUTH_OIDC_CLIENT_SECRET - valueFrom: - secretKeyRef: - name: datahub-oidc-secret - key: secret -``` - - -#### Advanced - -You can optionally customize the flow further using advanced configurations. These allow -you to specify the OIDC scopes requested, how the DataHub username is parsed from the claims returned by the identity provider, and how users and groups are extracted and provisioned from the OIDC claim set. - -``` -# Optional Configuration Values: -AUTH_OIDC_USER_NAME_CLAIM=your-custom-claim -AUTH_OIDC_USER_NAME_CLAIM_REGEX=your-custom-regex -AUTH_OIDC_SCOPE=your-custom-scope -AUTH_OIDC_CLIENT_AUTHENTICATION_METHOD=authentication-method -``` - -- `AUTH_OIDC_USER_NAME_CLAIM`: The attribute that will contain the username used on the DataHub platform. By default, this is "email" provided - as part of the standard `email` scope. -- `AUTH_OIDC_USER_NAME_CLAIM_REGEX`: A regex string used for extracting the username from the userNameClaim attribute. 
For example, if - the userNameClaim field will contain an email address, and we want to omit the domain name suffix of the email, we can specify a custom - regex to do so. (e.g. `([^@]+)`) -- `AUTH_OIDC_SCOPE`: a string representing the scopes to be requested from the identity provider, granted by the end user. For more info, - see [OpenID Connect Scopes](https://auth0.com/docs/scopes/openid-connect-scopes). -- `AUTH_OIDC_CLIENT_AUTHENTICATION_METHOD`: a string representing the token authentication method to use with the identity provider. Default value - is `client_secret_basic`, which uses HTTP Basic authentication. Another option is `client_secret_post`, which includes the client_id and secret_id - as form parameters in the HTTP POST request. For more info, see [OAuth 2.0 Client Authentication](https://darutk.medium.com/oauth-2-0-client-authentication-4b5f929305d4) - -Additional OIDC Options: - -- `AUTH_OIDC_PREFERRED_JWS_ALGORITHM` - Can be used to select a preferred signing algorithm for id tokens. Examples include: `RS256` or `HS256`. If -your IdP includes `none` before `RS256`/`HS256` in the list of signing algorithms, then this value **MUST** be set. - -##### User & Group Provisioning (JIT Provisioning) - -By default, DataHub will optimistically attempt to provision users and groups that do not already exist at the time of login. -For users, we extract information like first name, last name, display name, & email to construct a basic user profile. If a groups claim is present, -we simply extract their names. - -The default provisioning behavior can be customized using the following configs. - -``` -# User and groups provisioning -AUTH_OIDC_JIT_PROVISIONING_ENABLED=true -AUTH_OIDC_PRE_PROVISIONING_REQUIRED=false -AUTH_OIDC_EXTRACT_GROUPS_ENABLED=false -AUTH_OIDC_GROUPS_CLAIM= -``` - -- `AUTH_OIDC_JIT_PROVISIONING_ENABLED`: Whether DataHub users & groups should be provisioned on login if they do not exist. Defaults to true. -- `AUTH_OIDC_PRE_PROVISIONING_REQUIRED`: Whether the user should already exist in DataHub when they login, failing login if they are not. This is appropriate for situations in which users and groups are batch ingested and tightly controlled inside your environment. Defaults to false. -- `AUTH_OIDC_EXTRACT_GROUPS_ENABLED`: Only applies if `AUTH_OIDC_JIT_PROVISIONING_ENABLED` is set to true. This determines whether we should attempt to extract a list of group names from a particular claim in the OIDC attributes. Note that if this is enabled, each login will re-sync group membership with the groups in your Identity Provider, clearing the group membership that has been assigned through the DataHub UI. Enable with care! Defaults to false. -- `AUTH_OIDC_GROUPS_CLAIM`: Only applies if `AUTH_OIDC_EXTRACT_GROUPS_ENABLED` is set to true. This determines which OIDC claims will contain a list of string group names. Accepts multiple claim names with comma-separated values. I.e: `groups, teams, departments`. Defaults to 'groups'. - - -Once configuration has been updated, `datahub-frontend-react` will need to be restarted to pick up the new environment variables: - -``` -docker-compose -p datahub -f docker-compose.yml -f docker-compose.override.yml up datahub-frontend-react -``` - ->Note that by default, enabling OIDC will *not* disable the dummy JAAS authentication path, which can be reached at the `/login` -route of the React app. 
To disable this authentication path, additionally specify the following config: -> `AUTH_JAAS_ENABLED=false` - -### Summary - -Once configured, deploying the `datahub-frontend-react` container will enable an indirect authentication flow in which DataHub delegates -authentication to the specified identity provider. - -Once a user is authenticated by the identity provider, DataHub will extract a username from the provided claims -and grant DataHub access to the user by setting a pair of session cookies. - -A brief summary of the steps that occur when the user navigates to the React app are as follows: - -1. A `GET` to the `/authenticate` endpoint in `datahub-frontend` server is initiated -2. The `/authenticate` attempts to authenticate the request via session cookies -3. If auth fails, the server issues a redirect to the Identity Provider's login experience -4. The user logs in with the Identity Provider -5. The Identity Provider authenticates the user and redirects back to DataHub's registered login redirect URL, providing an authorization code which - can be used to retrieve information on behalf of the authenticated user -6. DataHub fetches the authenticated user's profile and extracts a username to identify the user on DataHub (eg. urn:li:corpuser:username) -7. DataHub sets session cookies for the newly authenticated user -8. DataHub redirects the user to the homepage ("/") - -## FAQ - -**No users can log in. Instead, I get redirected to the login page with an error. What do I do?** - -This can occur for a variety of reasons, but most often it is due to misconfiguration of Single-Sign On, either on the DataHub -side or on the Identity Provider side. - -First, verify that all values are consistent across them (e.g. the host URL where DataHub is deployed), and that no values -are misspelled (client id, client secret). - -Next, verify that the scopes requested are supported by your Identity Provider -and that the claim (i.e. attribute) DataHub uses for uniquely identifying the user is supported by your Identity Provider (refer to Identity Provider OpenID Connect documentation). By default, this claim is `email`. - -Then, make sure the Discovery URI you've configured (`AUTH_OIDC_DISCOVERY_URI`) is accessible where the datahub-frontend container is running. You -can do this by issuing a basic CURL to the address (**Pro-Tip**: you may also visit the address in your browser to check more specific details about your Identity Provider). - -Finally, check the container logs for the `datahub-frontend` container. This should hopefully provide some additional context -around why exactly the login handoff is not working. - -If all else fails, feel free to reach out to the DataHub Community on Slack for -real-time support - - - -**I'm seeing an error in the `datahub-frontend` logs when a user tries to login** -```shell -Caused by: java.lang.RuntimeException: Failed to resolve user name claim from profile provided by Identity Provider. Missing attribute. Attribute: 'email', Regex: '(.*)', Profile: { ... -``` -**what do I do?** - -This indicates that your Identity Provider does not provide the claim with name 'email', which DataHub -uses by default to uniquely identify users within your organization. - -To fix this, you may need to - -1. Change the claim that is used as the unique user identifier to something else by changing the `AUTH_OIDC_USER_NAME_CLAIM` (e.g. to "name" or "preferred_username") _OR_ -2. 
Change the environment variable `AUTH_OIDC_SCOPE` to include the scope required to retrieve the claim with name "email" - -For the `datahub-frontend` container / pod. - -**Pro-Tip**: Check the documentation for your Identity Provider to learn more about the scope claims supported. diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward']) - - -class RoIAlignRotatedFunction(Function): - - @staticmethod - def symbolic(g, features, rois, out_size, spatial_scale, sample_num, - aligned, clockwise): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - return g.op( - 'mmcv::MMCVRoIAlignRotated', - features, - rois, - output_height_i=out_h, - output_width_i=out_h, - spatial_scale_f=spatial_scale, - sampling_ratio_i=sample_num, - aligned_i=aligned, - clockwise_i=clockwise) - - @staticmethod - def forward(ctx, - features, - rois, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - ctx.spatial_scale = spatial_scale - ctx.sample_num = sample_num - ctx.aligned = aligned - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = features.size() - - batch_size, num_channels, data_height, data_width = features.size() - num_rois = rois.size(0) - - output = features.new_zeros(num_rois, num_channels, out_h, out_w) - ext_module.roi_align_rotated_forward( - features, - rois, - output, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - spatial_scale = ctx.spatial_scale - aligned = ctx.aligned - clockwise = ctx.clockwise - sample_num = ctx.sample_num - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, data_height, data_width = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, data_height, - data_width) - ext_module.roi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return grad_input, grad_rois, None, None, None, None, None - - -roi_align_rotated = 
RoIAlignRotatedFunction.apply - - -class RoIAlignRotated(nn.Module): - """RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. - - Args: - out_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sample_num (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - Default: True. - clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - def __init__(self, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - super(RoIAlignRotated, self).__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.sample_num = int(sample_num) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, features, rois): - return RoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.sample_num, self.aligned, - self.clockwise) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py deleted file mode 100644 index 6376b7ff894280cb2782243b25e8973650591577..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. 
- """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/detectors_resnext.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 57d032fe37ed82d5ba24e761bdc014cc0ee5ac64..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,122 +0,0 @@ -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(DetectoRS_ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - return super().make_res_layer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/test_mixins.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/test_mixins.py deleted file mode 100644 index 78a092a431aa884ab7dfd08346f79a4ccf8303bf..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/test_mixins.py +++ /dev/null @@ -1,348 +0,0 @@ -import logging -import sys - -import torch - -from mmdet.core import (bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) - -logger = logging.getLogger(__name__) - -if sys.version_info >= (3, 7): - from mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - - if sys.version_info >= (3, 7): - - async def async_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False, - bbox_semaphore=None, - global_lock=None): - """Asynchronized test for box head without augmentation.""" - rois = bbox2roi(proposals) - roi_feats = self.bbox_roi_extractor( - x[:len(self.bbox_roi_extractor.featmap_strides)], rois) - if self.with_shared_head: - roi_feats = self.shared_head(roi_feats) - sleep_interval = rcnn_test_cfg.get('async_sleep_interval', 0.017) - - async with completed( - __name__, 'bbox_head_forward', - sleep_interval=sleep_interval): - cls_score, bbox_pred = self.bbox_head(roi_feats) - - img_shape = img_metas[0]['img_shape'] - scale_factor = img_metas[0]['scale_factor'] - det_bboxes, det_labels = self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=rescale, - cfg=rcnn_test_cfg) - return det_bboxes, det_labels - - def simple_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False): - """Test only det bboxes without augmentation. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (Tensor or List[Tensor]): Region proposals. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - tuple[list[Tensor], list[Tensor]]: The first list contains - the boxes of the corresponding image in a batch, each - tensor has the shape (num_boxes, 5) and last dimension - 5 represent (tl_x, tl_y, br_x, br_y, score). Each Tensor - in the second list is the labels with shape (num_boxes, ). - The length of both lists should be equal to batch_size. - """ - # get origin input shape to support onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - else: - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # The length of proposals of different batches may be different. - # In order to form a batch, a padding operation is required. 
- if isinstance(proposals, list): - # padding to form a batch - max_size = max([proposal.size(0) for proposal in proposals]) - for i, proposal in enumerate(proposals): - supplement = proposal.new_full( - (max_size - proposal.size(0), proposal.size(1)), 0) - proposals[i] = torch.cat((supplement, proposal), dim=0) - rois = torch.stack(proposals, dim=0) - else: - rois = proposals - - batch_index = torch.arange( - rois.size(0), device=rois.device).float().view(-1, 1, 1).expand( - rois.size(0), rois.size(1), 1) - rois = torch.cat([batch_index, rois[..., :4]], dim=-1) - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - - # Eliminate the batch dimension - rois = rois.view(-1, 5) - bbox_results = self._bbox_forward(x, rois) - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, -1) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, -1) - - if not torch.onnx.is_in_onnx_export(): - # remove padding - supplement_mask = rois[..., -1] == 0 - cls_score[supplement_mask, :] = 0 - - # bbox_pred would be None in some detector when with_reg is False, - # e.g. Grid R-CNN. - if bbox_pred is not None: - # the bbox prediction of some detectors like SABL is not Tensor - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.reshape(batch_size, - num_proposals_per_img, -1) - if not torch.onnx.is_in_onnx_export(): - bbox_pred[supplement_mask, :] = 0 - else: - # TODO: Looking forward to a better way - # For SABL - bbox_preds = self.bbox_head.bbox_pred_split( - bbox_pred, num_proposals_per_img) - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(len(proposals)): - # remove padding - supplement_mask = proposals[i][..., -1] == 0 - for bbox in bbox_preds[i]: - bbox[supplement_mask] = 0 - det_bbox, det_label = self.bbox_head.get_bboxes( - rois[i], - cls_score[i], - bbox_preds[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - return det_bboxes, det_labels - else: - bbox_pred = None - - return self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shapes, - scale_factors, - rescale=rescale, - cfg=rcnn_test_cfg) - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - # TODO more flexible - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels - - -class 
MaskTestMixin(object): - - if sys.version_info >= (3, 7): - - async def async_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False, - mask_test_cfg=None): - """Asynchronized test for mask head without augmentation.""" - # image shape of the first image in the batch (only one) - ori_shape = img_metas[0]['ori_shape'] - scale_factor = img_metas[0]['scale_factor'] - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - if rescale and not isinstance(scale_factor, - (float, torch.Tensor)): - scale_factor = det_bboxes.new_tensor(scale_factor) - _bboxes = ( - det_bboxes[:, :4] * - scale_factor if rescale else det_bboxes) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor( - x[:len(self.mask_roi_extractor.featmap_strides)], - mask_rois) - - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - if mask_test_cfg and mask_test_cfg.get('async_sleep_interval'): - sleep_interval = mask_test_cfg['async_sleep_interval'] - else: - sleep_interval = 0.035 - async with completed( - __name__, - 'mask_head_forward', - sleep_interval=sleep_interval): - mask_pred = self.mask_head(mask_feats) - segm_result = self.mask_head.get_seg_masks( - mask_pred, _bboxes, det_labels, self.test_cfg, ori_shape, - scale_factor, rescale) - return segm_result - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # The length of proposals of different batches may be different. - # In order to form a batch, a padding operation is required. - if isinstance(det_bboxes, list): - # padding to form a batch - max_size = max([bboxes.size(0) for bboxes in det_bboxes]) - for i, (bbox, label) in enumerate(zip(det_bboxes, det_labels)): - supplement_bbox = bbox.new_full( - (max_size - bbox.size(0), bbox.size(1)), 0) - supplement_label = label.new_full((max_size - label.size(0), ), - 0) - det_bboxes[i] = torch.cat((supplement_bbox, bbox), dim=0) - det_labels[i] = torch.cat((supplement_label, label), dim=0) - det_bboxes = torch.stack(det_bboxes, dim=0) - det_labels = torch.stack(det_labels, dim=0) - - batch_size = det_bboxes.size(0) - num_proposals_per_img = det_bboxes.shape[1] - - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- det_bboxes = det_bboxes[..., :4] - if rescale: - if not isinstance(scale_factors[0], float): - scale_factors = det_bboxes.new_tensor(scale_factors) - det_bboxes = det_bboxes * scale_factors.unsqueeze(1) - - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - - # Recover the batch dimension - mask_preds = mask_pred.reshape(batch_size, num_proposals_per_img, - *mask_pred.shape[1:]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(batch_size): - mask_pred = mask_preds[i] - det_bbox = det_bboxes[i] - det_label = det_labels[i] - - # remove padding - supplement_mask = det_bbox[..., -1] != 0 - mask_pred = mask_pred[supplement_mask] - det_bbox = det_bbox[supplement_mask] - det_label = det_label[supplement_mask] - - if det_label.shape[0] == 0: - segm_results.append([[] - for _ in range(self.mask_head.num_classes) - ]) - else: - segm_result = self.mask_head.get_seg_masks( - mask_pred, det_bbox, det_label, self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return segm_result diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/transformer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/transformer.py deleted file mode 100644 index e61ae0dd941a7be00b3e41a3de833ec50470a45f..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/transformer.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import warnings - -import torch -import torch.nn as nn - -from annotator.uniformer.mmcv import ConfigDict, deprecated_api_warning -from annotator.uniformer.mmcv.cnn import Linear, build_activation_layer, build_norm_layer -from annotator.uniformer.mmcv.runner.base_module import BaseModule, ModuleList, Sequential -from annotator.uniformer.mmcv.utils import build_from_cfg -from .drop import build_dropout -from .registry import (ATTENTION, FEEDFORWARD_NETWORK, POSITIONAL_ENCODING, - TRANSFORMER_LAYER, TRANSFORMER_LAYER_SEQUENCE) - -# Avoid BC-breaking of importing MultiScaleDeformableAttention from this file -try: - from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention # noqa F401 - warnings.warn( - ImportWarning( - '``MultiScaleDeformableAttention`` has been moved to ' - '``mmcv.ops.multi_scale_deform_attn``, please change original path ' # noqa E501 - '``from annotator.uniformer.mmcv.cnn.bricks.transformer import MultiScaleDeformableAttention`` ' # noqa E501 - 'to ``from annotator.uniformer.mmcv.ops.multi_scale_deform_attn import MultiScaleDeformableAttention`` ' # noqa E501 - )) - -except ImportError: - warnings.warn('Fail to import ``MultiScaleDeformableAttention`` from ' - '``mmcv.ops.multi_scale_deform_attn``, ' - 'You should install ``mmcv-full`` if you need this module. ') - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) - - -def build_attention(cfg, default_args=None): - """Builder for attention.""" - return build_from_cfg(cfg, ATTENTION, default_args) - - -def build_feedforward_network(cfg, default_args=None): - """Builder for feed-forward network (FFN).""" - return build_from_cfg(cfg, FEEDFORWARD_NETWORK, default_args) - - -def build_transformer_layer(cfg, default_args=None): - """Builder for transformer layer.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER, default_args) - - -def build_transformer_layer_sequence(cfg, default_args=None): - """Builder for transformer encoder and transformer decoder.""" - return build_from_cfg(cfg, TRANSFORMER_LAYER_SEQUENCE, default_args) - - -@ATTENTION.register_module() -class MultiheadAttention(BaseModule): - """A wrapper for ``torch.nn.MultiheadAttention``. - - This module implements MultiheadAttention with identity connection, - and positional encoding is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. - attn_drop (float): A Dropout layer on attn_output_weights. - Default: 0.0. - proj_drop (float): A Dropout layer after `nn.MultiheadAttention`. - Default: 0.0. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): When it is True, Key, Query and Value are shape of - (batch, n, embed_dim), otherwise (n, batch, embed_dim). - Default to False. 
- """ - - def __init__(self, - embed_dims, - num_heads, - attn_drop=0., - proj_drop=0., - dropout_layer=dict(type='Dropout', drop_prob=0.), - init_cfg=None, - batch_first=False, - **kwargs): - super(MultiheadAttention, self).__init__(init_cfg) - if 'dropout' in kwargs: - warnings.warn('The arguments `dropout` in MultiheadAttention ' - 'has been deprecated, now you can separately ' - 'set `attn_drop`(float), proj_drop(float), ' - 'and `dropout_layer`(dict) ') - attn_drop = kwargs['dropout'] - dropout_layer['drop_prob'] = kwargs.pop('dropout') - - self.embed_dims = embed_dims - self.num_heads = num_heads - self.batch_first = batch_first - - self.attn = nn.MultiheadAttention(embed_dims, num_heads, attn_drop, - **kwargs) - - self.proj_drop = nn.Dropout(proj_drop) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else nn.Identity() - - @deprecated_api_warning({'residual': 'identity'}, - cls_name='MultiheadAttention') - def forward(self, - query, - key=None, - value=None, - identity=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `MultiheadAttention`. - - **kwargs allow passing a more general data flow when combining - with other operations in `transformerlayer`. - - Args: - query (Tensor): The input query with shape [num_queries, bs, - embed_dims] if self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - If None, the ``query`` will be used. Defaults to None. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Defaults to None. - If None, the `key` will be used. - identity (Tensor): This tensor, with the same shape as x, - will be used for the identity link. - If None, `x` will be used. Defaults to None. - query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. If not None, it will - be added to `x` before forward function. Defaults to None. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Defaults to None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. Defaults to None. - attn_mask (Tensor): ByteTensor mask with shape [num_queries, - num_keys]. Same in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_keys]. - Defaults to None. - - Returns: - Tensor: forwarded results with shape - [num_queries, bs, embed_dims] - if self.batch_first is False, else - [bs, num_queries embed_dims]. 
- """ - - if key is None: - key = query - if value is None: - value = key - if identity is None: - identity = query - if key_pos is None: - if query_pos is not None: - # use query_pos if key_pos is not available - if query_pos.shape == key.shape: - key_pos = query_pos - else: - warnings.warn(f'position encoding of key is' - f'missing in {self.__class__.__name__}.') - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - - # Because the dataflow('key', 'query', 'value') of - # ``torch.nn.MultiheadAttention`` is (num_query, batch, - # embed_dims), We should adjust the shape of dataflow from - # batch_first (batch, num_query, embed_dims) to num_query_first - # (num_query ,batch, embed_dims), and recover ``attn_output`` - # from num_query_first to batch_first. - if self.batch_first: - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - out = self.attn( - query=query, - key=key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - if self.batch_first: - out = out.transpose(0, 1) - - return identity + self.dropout_layer(self.proj_drop(out)) - - -@FEEDFORWARD_NETWORK.register_module() -class FFN(BaseModule): - """Implements feed-forward networks (FFNs) with identity connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. Defaults: 256. - feedforward_channels (int): The hidden dimension of FFNs. - Defaults: 1024. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Default: 2. - act_cfg (dict, optional): The activation config for FFNs. - Default: dict(type='ReLU') - ffn_drop (float, optional): Probability of an element to be - zeroed in FFN. Default 0.0. - add_identity (bool, optional): Whether to add the - identity connection. Default: `True`. - dropout_layer (obj:`ConfigDict`): The dropout_layer used - when adding the shortcut. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - @deprecated_api_warning( - { - 'dropout': 'ffn_drop', - 'add_residual': 'add_identity' - }, - cls_name='FFN') - def __init__(self, - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - ffn_drop=0., - dropout_layer=None, - add_identity=True, - init_cfg=None, - **kwargs): - super(FFN, self).__init__(init_cfg) - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' - self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.activate = build_activation_layer(act_cfg) - - layers = [] - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(ffn_drop))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - layers.append(nn.Dropout(ffn_drop)) - self.layers = Sequential(*layers) - self.dropout_layer = build_dropout( - dropout_layer) if dropout_layer else torch.nn.Identity() - self.add_identity = add_identity - - @deprecated_api_warning({'residual': 'identity'}, cls_name='FFN') - def forward(self, x, identity=None): - """Forward function for `FFN`. - - The function would add x to the output tensor if residue is None. 
- """ - out = self.layers(x) - if not self.add_identity: - return self.dropout_layer(out) - if identity is None: - identity = x - return identity + self.dropout_layer(out) - - -@TRANSFORMER_LAYER.register_module() -class BaseTransformerLayer(BaseModule): - """Base `TransformerLayer` for vision transformer. - - It can be built from `mmcv.ConfigDict` and support more flexible - customization, for example, using any number of `FFN or LN ` and - use different kinds of `attention` by specifying a list of `ConfigDict` - named `attn_cfgs`. It is worth mentioning that it supports `prenorm` - when you specifying `norm` as the first element of `operation_order`. - More details about the `prenorm`: `On Layer Normalization in the - Transformer Architecture `_ . - - Args: - attn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for `self_attention` or `cross_attention` modules, - The order of the configs in the list should be consistent with - corresponding attentions in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. Default: None. - ffn_cfgs (list[`mmcv.ConfigDict`] | obj:`mmcv.ConfigDict` | None )): - Configs for FFN, The order of the configs in the list should be - consistent with corresponding ffn in operation_order. - If it is a dict, all of the attention modules in operation_order - will be built with this config. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm'). - Support `prenorm` when you specifying first element as `norm`. - Default:None. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN'). - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - batch_first (bool): Key, Query and Value are shape - of (batch, n, embed_dim) - or (n, batch, embed_dim). Default to False. - """ - - def __init__(self, - attn_cfgs=None, - ffn_cfgs=dict( - type='FFN', - embed_dims=256, - feedforward_channels=1024, - num_fcs=2, - ffn_drop=0., - act_cfg=dict(type='ReLU', inplace=True), - ), - operation_order=None, - norm_cfg=dict(type='LN'), - init_cfg=None, - batch_first=False, - **kwargs): - - deprecated_args = dict( - feedforward_channels='feedforward_channels', - ffn_dropout='ffn_drop', - ffn_num_fcs='num_fcs') - for ori_name, new_name in deprecated_args.items(): - if ori_name in kwargs: - warnings.warn( - f'The arguments `{ori_name}` in BaseTransformerLayer ' - f'has been deprecated, now you should set `{new_name}` ' - f'and other FFN related arguments ' - f'to a dict named `ffn_cfgs`. ') - ffn_cfgs[new_name] = kwargs[ori_name] - - super(BaseTransformerLayer, self).__init__(init_cfg) - - self.batch_first = batch_first - - assert set(operation_order) & set( - ['self_attn', 'norm', 'ffn', 'cross_attn']) == \ - set(operation_order), f'The operation_order of' \ - f' {self.__class__.__name__} should ' \ - f'contains all four operation type ' \ - f"{['self_attn', 'norm', 'ffn', 'cross_attn']}" - - num_attn = operation_order.count('self_attn') + operation_order.count( - 'cross_attn') - if isinstance(attn_cfgs, dict): - attn_cfgs = [copy.deepcopy(attn_cfgs) for _ in range(num_attn)] - else: - assert num_attn == len(attn_cfgs), f'The length ' \ - f'of attn_cfg {num_attn} is ' \ - f'not consistent with the number of attention' \ - f'in operation_order {operation_order}.' 
- - self.num_attn = num_attn - self.operation_order = operation_order - self.norm_cfg = norm_cfg - self.pre_norm = operation_order[0] == 'norm' - self.attentions = ModuleList() - - index = 0 - for operation_name in operation_order: - if operation_name in ['self_attn', 'cross_attn']: - if 'batch_first' in attn_cfgs[index]: - assert self.batch_first == attn_cfgs[index]['batch_first'] - else: - attn_cfgs[index]['batch_first'] = self.batch_first - attention = build_attention(attn_cfgs[index]) - # Some custom attentions used as `self_attn` - # or `cross_attn` can have different behavior. - attention.operation_name = operation_name - self.attentions.append(attention) - index += 1 - - self.embed_dims = self.attentions[0].embed_dims - - self.ffns = ModuleList() - num_ffns = operation_order.count('ffn') - if isinstance(ffn_cfgs, dict): - ffn_cfgs = ConfigDict(ffn_cfgs) - if isinstance(ffn_cfgs, dict): - ffn_cfgs = [copy.deepcopy(ffn_cfgs) for _ in range(num_ffns)] - assert len(ffn_cfgs) == num_ffns - for ffn_index in range(num_ffns): - if 'embed_dims' not in ffn_cfgs[ffn_index]: - ffn_cfgs['embed_dims'] = self.embed_dims - else: - assert ffn_cfgs[ffn_index]['embed_dims'] == self.embed_dims - self.ffns.append( - build_feedforward_network(ffn_cfgs[ffn_index], - dict(type='FFN'))) - - self.norms = ModuleList() - num_norms = operation_order.count('norm') - for _ in range(num_norms): - self.norms.append(build_norm_layer(norm_cfg, self.embed_dims)[1]) - - def forward(self, - query, - key=None, - value=None, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerDecoderLayer`. - - **kwargs contains some specific arguments of attentions. - - Args: - query (Tensor): The input query with shape - [num_queries, bs, embed_dims] if - self.batch_first is False, else - [bs, num_queries embed_dims]. - key (Tensor): The key tensor with shape [num_keys, bs, - embed_dims] if self.batch_first is False, else - [bs, num_keys, embed_dims] . - value (Tensor): The value tensor with same shape as `key`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. - attn_masks (List[Tensor] | None): 2D Tensor used in - calculation of corresponding attention. The length of - it should equal to the number of `attention` in - `operation_order`. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in `self_attn` layer. - Defaults to None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: forwarded results with shape [num_queries, bs, embed_dims]. 
- """ - - norm_index = 0 - attn_index = 0 - ffn_index = 0 - identity = query - if attn_masks is None: - attn_masks = [None for _ in range(self.num_attn)] - elif isinstance(attn_masks, torch.Tensor): - attn_masks = [ - copy.deepcopy(attn_masks) for _ in range(self.num_attn) - ] - warnings.warn(f'Use same attn_mask in all attentions in ' - f'{self.__class__.__name__} ') - else: - assert len(attn_masks) == self.num_attn, f'The length of ' \ - f'attn_masks {len(attn_masks)} must be equal ' \ - f'to the number of attention in ' \ - f'operation_order {self.num_attn}' - - for layer in self.operation_order: - if layer == 'self_attn': - temp_key = temp_value = query - query = self.attentions[attn_index]( - query, - temp_key, - temp_value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=query_key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'norm': - query = self.norms[norm_index](query) - norm_index += 1 - - elif layer == 'cross_attn': - query = self.attentions[attn_index]( - query, - key, - value, - identity if self.pre_norm else None, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=attn_masks[attn_index], - key_padding_mask=key_padding_mask, - **kwargs) - attn_index += 1 - identity = query - - elif layer == 'ffn': - query = self.ffns[ffn_index]( - query, identity if self.pre_norm else None) - ffn_index += 1 - - return query - - -@TRANSFORMER_LAYER_SEQUENCE.register_module() -class TransformerLayerSequence(BaseModule): - """Base class for TransformerEncoder and TransformerDecoder in vision - transformer. - - As base-class of Encoder and Decoder in vision transformer. - Support customization such as specifying different kind - of `transformer_layer` in `transformer_coder`. - - Args: - transformerlayer (list[obj:`mmcv.ConfigDict`] | - obj:`mmcv.ConfigDict`): Config of transformerlayer - in TransformerCoder. If it is obj:`mmcv.ConfigDict`, - it would be repeated `num_layer` times to a - list[`mmcv.ConfigDict`]. Default: None. - num_layers (int): The number of `TransformerLayer`. Default: None. - init_cfg (obj:`mmcv.ConfigDict`): The Config for initialization. - Default: None. - """ - - def __init__(self, transformerlayers=None, num_layers=None, init_cfg=None): - super(TransformerLayerSequence, self).__init__(init_cfg) - if isinstance(transformerlayers, dict): - transformerlayers = [ - copy.deepcopy(transformerlayers) for _ in range(num_layers) - ] - else: - assert isinstance(transformerlayers, list) and \ - len(transformerlayers) == num_layers - self.num_layers = num_layers - self.layers = ModuleList() - for i in range(num_layers): - self.layers.append(build_transformer_layer(transformerlayers[i])) - self.embed_dims = self.layers[0].embed_dims - self.pre_norm = self.layers[0].pre_norm - - def forward(self, - query, - key, - value, - query_pos=None, - key_pos=None, - attn_masks=None, - query_key_padding_mask=None, - key_padding_mask=None, - **kwargs): - """Forward function for `TransformerCoder`. - - Args: - query (Tensor): Input query with shape - `(num_queries, bs, embed_dims)`. - key (Tensor): The key tensor with shape - `(num_keys, bs, embed_dims)`. - value (Tensor): The value tensor with shape - `(num_keys, bs, embed_dims)`. - query_pos (Tensor): The positional encoding for `query`. - Default: None. - key_pos (Tensor): The positional encoding for `key`. - Default: None. 
- attn_masks (List[Tensor], optional): Each element is 2D Tensor - which is used in calculation of corresponding attention in - operation_order. Default: None. - query_key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_queries]. Only used in self-attention - Default: None. - key_padding_mask (Tensor): ByteTensor for `query`, with - shape [bs, num_keys]. Default: None. - - Returns: - Tensor: results with shape [num_queries, bs, embed_dims]. - """ - for layer in self.layers: - query = layer( - query, - key, - value, - query_pos=query_pos, - key_pos=key_pos, - attn_masks=attn_masks, - query_key_padding_mask=query_key_padding_mask, - key_padding_mask=key_padding_mask, - **kwargs) - return query diff --git a/spaces/abidlabs/vision-transformer/app.py b/spaces/abidlabs/vision-transformer/app.py deleted file mode 100644 index 94b1678db7040c5aad0f2575e3d535f8a2b2cac3..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/vision-transformer/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -gr.Interface.load( - "huggingface/google/vit-base-patch16-224", - theme="default", - examples=[["alligator.jpg"], ["laptop.jpg"]], - css=".footer{display:none !important}", - title=None).launch() \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/base.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/base.py deleted file mode 100644 index 3771010ae6f13c8bfe807a276a43ccdab329402d..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/gl/base.py +++ /dev/null @@ -1,383 +0,0 @@ -from enum import Enum - -from pyglet import gl -from pyglet.gl import gl_info - - -class OpenGLAPI(Enum): - OPENGL = 1 - OPENGL_ES = 2 - - -class Config: - """Graphics configuration. - - A Config stores the preferences for OpenGL attributes such as the - number of auxiliary buffers, size of the colour and depth buffers, - double buffering, stencilling, multi- and super-sampling, and so on. - - Different platforms support a different set of attributes, so these - are set with a string key and a value which is integer or boolean. - - :Ivariables: - `double_buffer` : bool - Specify the presence of a back-buffer for every color buffer. - `stereo` : bool - Specify the presence of separate left and right buffer sets. - `buffer_size` : int - Total bits per sample per color buffer. - `aux_buffers` : int - The number of auxiliary color buffers. - `sample_buffers` : int - The number of multisample buffers. - `samples` : int - The number of samples per pixel, or 0 if there are no multisample - buffers. - `red_size` : int - Bits per sample per buffer devoted to the red component. - `green_size` : int - Bits per sample per buffer devoted to the green component. - `blue_size` : int - Bits per sample per buffer devoted to the blue component. - `alpha_size` : int - Bits per sample per buffer devoted to the alpha component. - `depth_size` : int - Bits per sample in the depth buffer. - `stencil_size` : int - Bits per sample in the stencil buffer. - `accum_red_size` : int - Bits per pixel devoted to the red component in the accumulation - buffer. - `accum_green_size` : int - Bits per pixel devoted to the green component in the accumulation - buffer. - `accum_blue_size` : int - Bits per pixel devoted to the blue component in the accumulation - buffer. 
- `accum_alpha_size` : int - Bits per pixel devoted to the alpha component in the accumulation - buffer. - """ - - _attribute_names = [ - 'double_buffer', - 'stereo', - 'buffer_size', - 'aux_buffers', - 'sample_buffers', - 'samples', - 'red_size', - 'green_size', - 'blue_size', - 'alpha_size', - 'depth_size', - 'stencil_size', - 'accum_red_size', - 'accum_green_size', - 'accum_blue_size', - 'accum_alpha_size', - 'major_version', - 'minor_version', - 'forward_compatible', - 'opengl_api', - 'debug' - ] - - major_version = None - minor_version = None - forward_compatible = None - opengl_api = None - debug = None - - def __init__(self, **kwargs): - """Create a template config with the given attributes. - - Specify attributes as keyword arguments, for example:: - - template = Config(double_buffer=True) - - """ - for name in self._attribute_names: - if name in kwargs: - setattr(self, name, kwargs[name]) - else: - setattr(self, name, None) - - self.opengl_api = self.opengl_api or "gl" - - def get_gl_attributes(self): - """Return a list of attributes set on this config. - - :rtype: list of tuple (name, value) - :return: All attributes, with unset attributes having a value of - ``None``. - """ - return [(name, getattr(self, name)) for name in self._attribute_names] - - def match(self, canvas): - """Return a list of matching complete configs for the given canvas. - - .. versionadded:: 1.2 - - :Parameters: - `canvas` : `Canvas` - Display to host contexts created from the config. - - :rtype: list of `CanvasConfig` - """ - raise NotImplementedError('abstract') - - def create_context(self, share): - """Create a GL context that satisifies this configuration. - - :deprecated: Use `CanvasConfig.create_context`. - - :Parameters: - `share` : `Context` - If not None, a context with which to share objects with. - - :rtype: `Context` - :return: The new context. - """ - raise gl.ConfigException('This config cannot be used to create contexts. ' - 'Use Config.match to created a CanvasConfig') - - def is_complete(self): - """Determine if this config is complete and able to create a context. - - Configs created directly are not complete, they can only serve - as templates for retrieving a supported config from the system. - For example, `pyglet.window.Screen.get_matching_configs` returns - complete configs. - - :deprecated: Use ``isinstance(config, CanvasConfig)``. - - :rtype: bool - :return: True if the config is complete and can create a context. - """ - return isinstance(self, CanvasConfig) - - def __repr__(self): - return f"{self.__class__.__name__}({self.get_gl_attributes()})" - - -class CanvasConfig(Config): - """OpenGL configuration for a particular canvas. - - Use `Config.match` to obtain an instance of this class. - - .. versionadded:: 1.2 - - :Ivariables: - `canvas` : `Canvas` - The canvas this config is valid on. - - """ - - def __init__(self, canvas, base_config): - self.canvas = canvas - - self.major_version = base_config.major_version - self.minor_version = base_config.minor_version - self.forward_compatible = base_config.forward_compatible - self.opengl_api = base_config.opengl_api or self.opengl_api - self.debug = base_config.debug - - def compatible(self, canvas): - raise NotImplementedError('abstract') - - def create_context(self, share): - """Create a GL context that satisifies this configuration. - - :Parameters: - `share` : `Context` - If not None, a context with which to share objects with. - - :rtype: `Context` - :return: The new context. 
- """ - raise NotImplementedError('abstract') - - def is_complete(self): - return True - - -class ObjectSpace: - def __init__(self): - # Textures and buffers scheduled for deletion - # the next time this object space is active. - self.doomed_textures = [] - self.doomed_buffers = [] - self.doomed_vaos = [] - self.doomed_shader_programs = [] - - -class Context: - """OpenGL context for drawing. - - Use `CanvasConfig.create_context` to create a context. - - :Ivariables: - `object_space` : `ObjectSpace` - An object which is shared between all contexts that share - GL objects. - - """ - # gl_info.GLInfo instance, filled in on first set_current - _info = None - - def __init__(self, config, context_share=None): - self.config = config - self.context_share = context_share - self.canvas = None - - if context_share: - self.object_space = context_share.object_space - else: - self.object_space = ObjectSpace() - - def __repr__(self): - return f"{self.__class__.__name__}(id={id(self)}, share={self.context_share})" - - def attach(self, canvas): - if self.canvas is not None: - self.detach() - if not self.config.compatible(canvas): - raise RuntimeError(f'Cannot attach {canvas} to {self}') - self.canvas = canvas - - def detach(self): - self.canvas = None - - def set_current(self): - if not self.canvas: - raise RuntimeError('Canvas has not been attached') - - # XXX not per-thread - gl.current_context = self - - # XXX - gl_info.set_active_context() - - if not self._info: - self._info = gl_info.GLInfo() - self._info.set_active_context() - - # Release Textures, Buffers, and VAOs on this context scheduled for - # deletion. Note that the garbage collector may introduce a race - # condition, so operate on a copy, and clear the list afterwards. - if self.object_space.doomed_textures: - textures = self.object_space.doomed_textures[:] - textures = (gl.GLuint * len(textures))(*textures) - gl.glDeleteTextures(len(textures), textures) - self.object_space.doomed_textures.clear() - if self.object_space.doomed_buffers: - buffers = self.object_space.doomed_buffers[:] - buffers = (gl.GLuint * len(buffers))(*buffers) - gl.glDeleteBuffers(len(buffers), buffers) - self.object_space.doomed_buffers.clear() - if self.object_space.doomed_vaos: - vaos = self.object_space.doomed_vaos[:] - vaos = (gl.GLuint * len(vaos))(*vaos) - gl.glDeleteVertexArrays(len(vaos), vaos) - self.object_space.doomed_vaos.clear() - if self.object_space.doomed_shader_programs: - for program_id in self.object_space.doomed_shader_programs: - gl.glDeleteProgram(program_id) - self.object_space.doomed_shader_programs.clear() - - def destroy(self): - """Release the context. - - The context will not be useable after being destroyed. Each platform - has its own convention for releasing the context and the buffer(s) - that depend on it in the correct order; this should never be called - by an application. - """ - self.detach() - - if gl.current_context is self: - gl.current_context = None - gl_info.remove_active_context() - - # Switch back to shadow context. - if gl._shadow_window is not None: - gl._shadow_window.switch_to() - - def delete_texture(self, texture_id): - """Safely delete a Texture belonging to this context. - - Usually, the Texture is released immediately using - ``glDeleteTextures``, however if another context that does not share - this context's object space is currently active, the deletion will - be deferred until an appropriate context is activated. - - :Parameters: - `texture_id` : int - The OpenGL name of the Texture to delete. 
- - """ - if self.object_space is gl.current_context.object_space: - gl.glDeleteTextures(1, gl.GLuint(texture_id)) - else: - self.object_space.doomed_textures.append(texture_id) - - def delete_buffer(self, buffer_id): - """Safely delete a Buffer object belonging to this context. - - This method behaves similarly to `delete_texture`, though for - ``glDeleteBuffers`` instead of ``glDeleteTextures``. - - :Parameters: - `buffer_id` : int - The OpenGL name of the buffer to delete. - - .. versionadded:: 1.1 - """ - if self.object_space is gl.current_context.object_space and False: - gl.glDeleteBuffers(1, gl.GLuint(buffer_id)) - else: - self.object_space.doomed_buffers.append(buffer_id) - - def delete_vao(self, vao_id): - """Safely delete a Vertex Array Object belonging to this context. - - This method behaves similarly to `delete_texture`, though for - ``glDeleteVertexArrays`` instead of ``glDeleteTextures``. - - :Parameters: - `vao_id` : int - The OpenGL name of the Vertex Array to delete. - - .. versionadded:: 2.0 - """ - if gl.current_context and self.object_space is gl.current_context.object_space and False: - gl.glDeleteVertexArrays(1, gl.GLuint(vao_id)) - else: - self.object_space.doomed_vaos.append(vao_id) - - def delete_shader_program(self, program_id): - """Safely delete a Shader Program belonging to this context. - - This method behaves similarly to `delete_texture`, though for - ``glDeleteProgram`` instead of ``glDeleteTextures``. - - :Parameters: - `program_id` : int - The OpenGL name of the Shader Program to delete. - - .. versionadded:: 2.0 - """ - if gl.current_context is self: - gl.glDeleteProgram(program_id) - else: - self.object_space.doomed_shader_programs.append(program_id) - - def get_info(self): - """Get the OpenGL information for this context. - - .. versionadded:: 1.2 - - :rtype: `GLInfo` - """ - return self._info diff --git a/spaces/ahdsoft/persian-keyphrase-extraction/app.py b/spaces/ahdsoft/persian-keyphrase-extraction/app.py deleted file mode 100644 index d9db6070aab17a9a28a4b7d3ea5ce2add3c574ff..0000000000000000000000000000000000000000 --- a/spaces/ahdsoft/persian-keyphrase-extraction/app.py +++ /dev/null @@ -1,202 +0,0 @@ -import streamlit as st -import numpy as np -from pandas import DataFrame -# from keybert import KeyBERT -# For Flair (Keybert) -# from flair.embeddings import TransformerDocumentEmbeddings -import seaborn as sns -# For download buttons -from functionforDownloadButtons import download_button -import os -import json - -from kpe_ranker import KpeRanker - -st.set_page_config( - page_title="استخراج عبارات کلیدی عهد", - page_icon="🎈", -) - - -def _max_width_(): - max_width_str = f"max-width: 1400px;" - st.markdown( - f""" - - """, - unsafe_allow_html=True, - ) - - -_max_width_() - -c30, c31, c32 = st.columns([2.5, 1, 3]) - -with c30: - # st.image("logo.png", width=400) - st.title("🔑 استخراج عبارات کلیدی") - st.header("") - - - -with st.expander("ℹ️ - About this app", expanded=True): - - st.write( - """ -- استخراج عبارات کلیدی، محصولی نوین از شرکت عهد است که در ارزیابی‌های صورت‌گرفته، دقت بیشتری را نسبت به رقبا از خود نشان داده است. 
- """ - ) - - st.markdown("") - -st.markdown("") -# st.markdown("## **...**") -with st.form(key="my_form"): - - - ce, c1, ce, c2, c3 = st.columns([0.07, 1, 0.07, 5, 0.07]) - with c1: - - - # if ModelType == "Default (DistilBERT)": - # kw_model = KeyBERT(model=roberta) - - @st.cache_resource - def load_model(): - return KpeRanker() - - kpe_ranker_extractor = load_model() - - # else: - # @st.cache(allow_output_mutation=True) - # def load_model(): - # return KeyBERT("distilbert-base-nli-mean-tokens") - - # kw_model = load_model() - - top_N = st.slider( - "# تعداد", - min_value=1, - max_value=30, - value=10, - help="You can choose the number of keywords/keyphrases to display. Between 1 and 30, default number is 10.", - ) -# min_Ngrams = st.number_input( -# "Minimum Ngram", -# min_value=1, -# max_value=4, -# help="""The minimum value for the ngram range. - -# *Keyphrase_ngram_range* sets the length of the resulting keywords/keyphrases. - -# To extract keyphrases, simply set *keyphrase_ngram_range* to (1, 2) or higher depending on the number of words you would like in the resulting keyphrases.""", -# # help="Minimum value for the keyphrase_ngram_range. keyphrase_ngram_range sets the length of the resulting keywords/keyphrases. To extract keyphrases, simply set keyphrase_ngram_range to (1, # 2) or higher depending on the number of words you would like in the resulting keyphrases.", -# ) - -# max_Ngrams = st.number_input( -# "Maximum Ngram", -# value=2, -# min_value=1, -# max_value=4, -# help="""The maximum value for the keyphrase_ngram_range. - -# *Keyphrase_ngram_range* sets the length of the resulting keywords/keyphrases. - -# To extract keyphrases, simply set *keyphrase_ngram_range* to (1, 2) or higher depending on the number of words you would like in the resulting keyphrases.""", -# ) - -# StopWordsCheckbox = st.checkbox( -# "Remove stop words", -# help="Tick this box to remove stop words from the document (currently English only)", -# ) - - use_ner = st.checkbox( - "NER", - value=True, - help="استفاده از شناسایی موجودیت‌های نام‌دار" ) - - - with c2: - doc = st.text_area( - "متن خود را وارد کنید", - height=510, - ) - - MAX_WORDS = 500 - import re - res = len(re.findall(r"\w+", doc)) - if res > MAX_WORDS: - st.warning( - "⚠️ Your text contains " - + str(res) - + " words." - + " Only the first 500 words will be reviewed. Stay tuned as increased allowance is coming! 
😊" - ) - - doc = doc[:MAX_WORDS] - - submit_button = st.form_submit_button(label="✨ پردازش") - - -if not submit_button: - st.stop() - - - - - - - - - -#################################### get keyphrases ####################################################### - -keywords = kpe_ranker_extractor.extract(text=doc, count=top_N, using_ner=use_ner, return_sorted=True) -# print(keywords) -st.markdown("## **🎈 Check & download results **") - -st.header("") - -cs, c1, c2, c3, cLast = st.columns([2, 1.5, 1.5, 1.5, 2]) - -with c1: - CSVButton2 = download_button(keywords, "Data.csv", "📥 Download (.csv)") -with c2: - CSVButton2 = download_button(keywords, "Data.txt", "📥 Download (.txt)") -with c3: - CSVButton2 = download_button(keywords, "Data.json", "📥 Download (.json)") - -st.header("") - -df = ( - DataFrame(keywords, columns=["Keyword/Keyphrase", "Relevancy"]) - .sort_values(by="Relevancy", ascending=False) - .reset_index(drop=True) -) - -df.index += 1 - -# Add styling -cmGreen = sns.light_palette("green", as_cmap=True) -cmRed = sns.light_palette("red", as_cmap=True) -df = df.style.background_gradient( - cmap=cmGreen, - subset=[ - "Relevancy", - ], -) - -c1, c2, c3 = st.columns([1, 3, 1]) - -format_dictionary = { - "Relevancy": "{:.1%}", -} - -df = df.format(format_dictionary) - -with c2: - st.table(df) diff --git a/spaces/aijack/jojo/op/fused_act.py b/spaces/aijack/jojo/op/fused_act.py deleted file mode 100644 index 8459d510d7b79684779dfe47f5b46d81c94b4a4d..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/op/fused_act.py +++ /dev/null @@ -1,86 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, 
self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/akashdhiman79830/MyGenAIAvatar/app.py b/spaces/akashdhiman79830/MyGenAIAvatar/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/akashdhiman79830/MyGenAIAvatar/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/akhaliq/Detectron2/README.md b/spaces/akhaliq/Detectron2/README.md deleted file mode 100644 index f1732738aa3b25c2c7aaf1bca5b8c87f76ea82d7..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detectron2/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Detectron2 -emoji: 📉 -colorFrom: green -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/akhaliq/SummerTime/model/single_doc/base_single_doc_model.py b/spaces/akhaliq/SummerTime/model/single_doc/base_single_doc_model.py deleted file mode 100644 index 079700afaa3a270bf2424a0bb75a71cccc861a10..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/single_doc/base_single_doc_model.py +++ /dev/null @@ -1,36 +0,0 @@ -from model.base_model import SummModel - - -class SingleDocSummModel(SummModel): - def __init__( - self, - trained_domain: str = None, - max_input_length: int = None, - max_output_length: int = None, - ): - super(SingleDocSummModel, self).__init__( - trained_domain=trained_domain, - max_input_length=max_input_length, - max_output_length=max_output_length, - ) - - @classmethod - def assert_summ_input_type(cls, corpus, query): - if not isinstance(corpus, list): - raise TypeError( - "Single-document summarization requires corpus of `List[str]`." - ) - if not all([isinstance(ins, str) for ins in corpus]): - raise TypeError( - "Single-document summarization requires corpus of `List[str]`." - ) - - if query is not None: - if not isinstance(query, list): - raise TypeError( - "Query-based single-document summarization requires query of `List[str]`." - ) - if not all([isinstance(q, str) for q in query]): - raise TypeError( - "Query-based single-document summarization requires query of `List[str]`." - ) diff --git a/spaces/akhaliq/demucs/app.py b/spaces/akhaliq/demucs/app.py deleted file mode 100644 index 67b0ad0943e927f88e28ebb1cef9bc0794d68250..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/demucs/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import gradio as gr -from scipy.io.wavfile import write - - -def inference(audio): - os.makedirs("out", exist_ok=True) - write('test.wav', audio[0], audio[1]) - os.system("python3 -m demucs.separate -n mdx_extra_q -d cpu test.wav -o out") - return "./out/mdx_extra_q/test/vocals.wav","./out/mdx_extra_q/test/bass.wav",\ -"./out/mdx_extra_q/test/drums.wav","./out/mdx_extra_q/test/other.wav" - -title = "Demucs" -description = "Gradio demo for Demucs: Music Source Separation in the Waveform Domain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below." -article = "

Music Source Separation in the Waveform Domain | Github Repo

" - -examples=[['test.mp3']] -gr.Interface( - inference, - gr.inputs.Audio(type="numpy", label="Input"), - [gr.outputs.Audio(type="filepath", label="Vocals"),gr.outputs.Audio(type="filepath", label="Bass"),gr.outputs.Audio(type="filepath", label="Drums"),gr.outputs.Audio(type="filepath", label="Other")], - title=title, - description=description, - article=article, - examples=examples - ).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/rtf.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/rtf.py deleted file mode 100644 index b4b0acab9b5b1b397b712b197d6aee6b3c69ed54..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/rtf.py +++ /dev/null @@ -1,146 +0,0 @@ -""" - pygments.formatters.rtf - ~~~~~~~~~~~~~~~~~~~~~~~ - - A formatter that generates RTF files. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.util import get_int_opt, surrogatepair - - -__all__ = ['RtfFormatter'] - - -class RtfFormatter(Formatter): - """ - Format tokens as RTF markup. This formatter automatically outputs full RTF - documents with color information and other useful stuff. Perfect for Copy and - Paste into Microsoft(R) Word(R) documents. - - Please note that ``encoding`` and ``outencoding`` options are ignored. - The RTF format is ASCII natively, but handles unicode characters correctly - thanks to escape sequences. - - .. versionadded:: 0.6 - - Additional options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - - `fontface` - The used font family, for example ``Bitstream Vera Sans``. Defaults to - some generic font which is supposed to have fixed width. - - `fontsize` - Size of the font used. Size is specified in half points. The - default is 24 half-points, giving a size 12 font. - - .. versionadded:: 2.0 - """ - name = 'RTF' - aliases = ['rtf'] - filenames = ['*.rtf'] - - def __init__(self, **options): - r""" - Additional options accepted: - - ``fontface`` - Name of the font used. Could for example be ``'Courier New'`` - to further specify the default which is ``'\fmodern'``. The RTF - specification claims that ``\fmodern`` are "Fixed-pitch serif - and sans serif fonts". Hope every RTF implementation thinks - the same about modern... - - """ - Formatter.__init__(self, **options) - self.fontface = options.get('fontface') or '' - self.fontsize = get_int_opt(options, 'fontsize', 0) - - def _escape(self, text): - return text.replace('\\', '\\\\') \ - .replace('{', '\\{') \ - .replace('}', '\\}') - - def _escape_text(self, text): - # empty strings, should give a small performance improvement - if not text: - return '' - - # escape text - text = self._escape(text) - - buf = [] - for c in text: - cn = ord(c) - if cn < (2**7): - # ASCII character - buf.append(str(c)) - elif (2**7) <= cn < (2**16): - # single unicode escape sequence - buf.append('{\\u%d}' % cn) - elif (2**16) <= cn: - # RTF limits unicode to 16 bits. 
- # Force surrogate pairs - buf.append('{\\u%d}{\\u%d}' % surrogatepair(cn)) - - return ''.join(buf).replace('\n', '\\par\n') - - def format_unencoded(self, tokensource, outfile): - # rtf 1.8 header - outfile.write('{\\rtf1\\ansi\\uc0\\deff0' - '{\\fonttbl{\\f0\\fmodern\\fprq1\\fcharset0%s;}}' - '{\\colortbl;' % (self.fontface and - ' ' + self._escape(self.fontface) or - '')) - - # convert colors and save them in a mapping to access them later. - color_mapping = {} - offset = 1 - for _, style in self.style: - for color in style['color'], style['bgcolor'], style['border']: - if color and color not in color_mapping: - color_mapping[color] = offset - outfile.write('\\red%d\\green%d\\blue%d;' % ( - int(color[0:2], 16), - int(color[2:4], 16), - int(color[4:6], 16) - )) - offset += 1 - outfile.write('}\\f0 ') - if self.fontsize: - outfile.write('\\fs%d' % self.fontsize) - - # highlight stream - for ttype, value in tokensource: - while not self.style.styles_token(ttype) and ttype.parent: - ttype = ttype.parent - style = self.style.style_for_token(ttype) - buf = [] - if style['bgcolor']: - buf.append('\\cb%d' % color_mapping[style['bgcolor']]) - if style['color']: - buf.append('\\cf%d' % color_mapping[style['color']]) - if style['bold']: - buf.append('\\b') - if style['italic']: - buf.append('\\i') - if style['underline']: - buf.append('\\ul') - if style['border']: - buf.append('\\chbrdr\\chcfpat%d' % - color_mapping[style['border']]) - start = ''.join(buf) - if start: - outfile.write('{%s ' % start) - outfile.write(self._escape_text(value)) - if start: - outfile.write('}') - - outfile.write('}') diff --git a/spaces/aliabd/non-interactive-dataframe/app.py b/spaces/aliabd/non-interactive-dataframe/app.py deleted file mode 100644 index 5770f744180cce6a7aae72829489ca74e6b8052c..0000000000000000000000000000000000000000 --- a/spaces/aliabd/non-interactive-dataframe/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import pandas as pd -import gradio as gr - -df = pd.read_csv("liked_images.csv") -df['url'] = df['url'].apply(lambda x: ' ') #' ') -df['seed'] = df['seed'].apply(lambda x: str(x)) -df['width'] = df['width'].apply(lambda x: str(x)) -df['height'] = df['height'].apply(lambda x: str(x)) -df['steps'] = df['steps'].apply(lambda x: str(x)) -df['source'] = df['source'].apply(lambda x: str(x)) -df = df[[ 'url', 'prompt', 'seed', 'width', 'height', 'steps', 'source']] - -def display_df(): - df_images = df.head() - return df_images - -def display_next10(dataframe, end): - start = (end or dataframe.index[-1]) + 1 - end = start + 9 - df_images = df.loc[start:end] - return df_images, end - -#Gradio Blocks -with gr.Blocks() as demo: - gr.Markdown("

Utility Gradio Space for viewing PlaygroundAI Images

") - #gr.Markdown("""
""") - gr.Markdown( - """
This tool helps you analyze and inspect the images and their corresponding prompts from PlaygroundAI.
Suhail has recently shared an open dataset of all the liked images and their prompts from PlaygroundAI on GitHub here. This is an attempt to explore that dataset using the power and flexibility of Gradio!
To use the tool: first, click the 'Get Initial dataframe' button, then click 'Next 10 Rows' repeatedly to page through the data.
Bonus: click on an image to get the original PlaygroundAI image displayed in a new tab.
""") - - with gr.Row(): - num_end = gr.Number(visible=False) - b1 = gr.Button("Get Initial dataframe") - b2 = gr.Button("Next 10 Rows") - - with gr.Row(): - out_dataframe = gr.Dataframe(wrap=True, max_rows=10, overflow_row_behaviour= "paginate", datatype = ["markdown", "markdown", "str", "str", "str", "str", "str", "str"], interactive=False) - - b1.click(fn=display_df, outputs=out_dataframe) - b2.click(fn=display_next10, inputs= [out_dataframe, num_end ], outputs=[out_dataframe, num_end]) - - gr.Markdown("
Please note that the PlaygroundAI dataset shared on GitHub doesn't contain the images themselves, only links to them. The idea is to get the maximum benefit out of this dataset and to find the best way to explore it. Gradio lets us embed markdown within a dataframe, so this app can display the actual images instead of bare links (meh!). I hope you will have as much fun playing with this Space as I had building it.
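As an illustration of that trick, here is a minimal, self-contained sketch (the DataFrame contents and image URL are hypothetical, not taken from liked_images.csv): each image URL is wrapped in markdown image syntax, and the column is declared with the "markdown" datatype so that gr.Dataframe renders the picture itself rather than the raw link.

```python
import pandas as pd
import gradio as gr

# Toy stand-in for the liked-images CSV (hypothetical URL and prompt).
df = pd.DataFrame({"url": ["https://example.com/a.png"], "prompt": ["a red fox"]})

# Wrap each URL in markdown image syntax; gr.Dataframe renders it as an image
# because the first column's datatype is declared as "markdown".
df["url"] = df["url"].apply(lambda u: f"![image]({u})")

with gr.Blocks() as demo:
    gr.Dataframe(value=df, datatype=["markdown", "str"], wrap=True, interactive=False)

demo.launch()
```

Using markdown cells keeps the dataframe itself plain data; the rendering work is left to Gradio's markdown support instead of hand-built HTML.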
") - -demo.launch(debug=True, show_error=True) \ No newline at end of file diff --git a/spaces/allknowingroger/huggingface/assets/index-7f4d6bd2.css b/spaces/allknowingroger/huggingface/assets/index-7f4d6bd2.css deleted file mode 100644 index 098ae1f1bce10863773ac288c65b5b85a125a065..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/huggingface/assets/index-7f4d6bd2.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: 
;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width: 640px){.container{max-width:640px}}@media (min-width: 768px){.container{max-width:768px}}@media (min-width: 1024px){.container{max-width:1024px}}@media (min-width: 1280px){.container{max-width:1280px}}@media (min-width: 1536px){.container{max-width:1536px}}.block{display:block}.flex{display:flex}.table{display:table}.hidden{display:none}.h-full{height:100%}.min-h-screen{min-height:100vh}.w-2\/3{width:66.666667%}.w-full{width:100%}.cursor-not-allowed{cursor:not-allowed}.cursor-pointer{cursor:pointer}.cursor-wait{cursor:wait}.select-text{-webkit-user-select:text;-moz-user-select:text;user-select:text}.resize-none{resize:none}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.space-y-12>:not([hidden])~:not([hidden]){--tw-space-y-reverse: 0;margin-top:calc(3rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(3rem * var(--tw-space-y-reverse))}.overflow-auto{overflow:auto}.whitespace-pre-wrap{white-space:pre-wrap}.break-words{overflow-wrap:break-word}.border-4{border-width:4px}.border-yellow-200{--tw-border-opacity: 1;border-color:rgb(254 240 138 / var(--tw-border-opacity))}.bg-yellow-200{--tw-bg-opacity: 1;background-color:rgb(254 240 138 / var(--tw-bg-opacity))}.bg-yellow-500{--tw-bg-opacity: 1;background-color:rgb(234 179 8 / var(--tw-bg-opacity))}.p-6{padding:1.5rem}.py-24{padding-top:6rem;padding-bottom:6rem}.py-6{padding-top:1.5rem;padding-bottom:1.5rem}.text-center{text-align:center}.text-6xl{font-size:3.75rem;line-height:1}.text-xl{font-size:1.25rem;line-height:1.75rem}.font-bold{font-weight:700}.text-yellow-200{--tw-text-opacity: 1;color:rgb(254 240 138 / var(--tw-text-opacity))}.opacity-50{opacity:.5}*,*:before,*:after{box-sizing:inherit;-webkit-user-select:inherit;-moz-user-select:inherit;user-select:inherit}html,body,#root{box-sizing:border-box;height:100%;min-height:100vh;width:100%;min-width:100vw;margin:0;padding:0;-webkit-user-select:none;-moz-user-select:none;user-select:none}input::-webkit-file-upload-button{display:none}@media (min-width: 1024px){.lg\:w-1\/3{width:33.333333%}} diff --git a/spaces/amankishore/sjc/my/utils/seed.py b/spaces/amankishore/sjc/my/utils/seed.py deleted file mode 100644 index 
e3e81fad6c7610d11ec8d847f9a61a4e6675ecc4..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/my/utils/seed.py +++ /dev/null @@ -1,21 +0,0 @@ -# from pytorch lightning -import random -import numpy as np -import torch - -max_seed_value = np.iinfo(np.uint32).max -min_seed_value = np.iinfo(np.uint32).min - - -def seed_everything(seed=None): - seed = int(seed) - - if not (min_seed_value <= seed <= max_seed_value): - raise ValueError(f"{seed} is not in bounds, numpy accepts from {min_seed_value} to {max_seed_value}") - - print(f"seed set to {seed}") - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - return seed diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_saw.c b/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_saw.c deleted file mode 100644 index caec0b02d7e02410bef484d06ca4733a06747bab..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/examples/paex_saw.c +++ /dev/null @@ -1,133 +0,0 @@ -/** @file paex_saw.c - @ingroup examples_src - @brief Play a simple (aliasing) sawtooth wave. - @author Phil Burk http://www.softsynth.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" -#define NUM_SECONDS (4) -#define SAMPLE_RATE (44100) - -typedef struct -{ - float left_phase; - float right_phase; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). 
-*/ -static int patestCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - /* Cast data passed through stream to our structure. */ - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - unsigned int i; - (void) inputBuffer; /* Prevent unused variable warning. */ - - for( i=0; ileft_phase; /* left */ - *out++ = data->right_phase; /* right */ - /* Generate simple sawtooth phaser that ranges between -1.0 and 1.0. */ - data->left_phase += 0.01f; - /* When signal reaches top, drop back down. */ - if( data->left_phase >= 1.0f ) data->left_phase -= 2.0f; - /* higher pitch so we can distinguish left and right. */ - data->right_phase += 0.03f; - if( data->right_phase >= 1.0f ) data->right_phase -= 2.0f; - } - return 0; -} - -/*******************************************************************/ -static paTestData data; -int main(void); -int main(void) -{ - PaStream *stream; - PaError err; - - printf("PortAudio Test: output sawtooth wave.\n"); - /* Initialize our data for use by callback. */ - data.left_phase = data.right_phase = 0.0; - /* Initialize library before making any other calls. */ - err = Pa_Initialize(); - if( err != paNoError ) goto error; - - /* Open an audio I/O stream. */ - err = Pa_OpenDefaultStream( &stream, - 0, /* no input channels */ - 2, /* stereo output */ - paFloat32, /* 32 bit floating point output */ - SAMPLE_RATE, - 256, /* frames per buffer */ - patestCallback, - &data ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - /* Sleep for several seconds. */ - Pa_Sleep(NUM_SECONDS*1000); - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - Pa_Terminate(); - printf("Test finished.\n"); - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/text_tests/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/tests/text_tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/config.py b/spaces/arxify/RVC-beta-v2-0618/config.py deleted file mode 100644 index 48187f530663fbe051585e0e2e37dbd06fd7f8ea..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/config.py +++ /dev/null @@ -1,123 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - - -def config_file_change_fp32(): - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - ) = self.arg_parse() - self.x_pad, 
self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - config_file_change_fp32() - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - config_file_change_fp32() - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - config_file_change_fp32() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/ChaCha20.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/ChaCha20.py deleted file mode 100644 index 9bd2252ce7297f18ba3c1a1d62aa748cc474c5f1..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/ChaCha20.py +++ /dev/null @@ -1,287 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. 
-# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Random import get_random_bytes - -from Crypto.Util.py3compat import _copy_bytes -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - create_string_buffer, - get_raw_buffer, VoidPointer, - SmartPointer, c_size_t, - c_uint8_ptr, c_ulong, - is_writeable_buffer) - -_raw_chacha20_lib = load_pycryptodome_raw_lib("Crypto.Cipher._chacha20", - """ - int chacha20_init(void **pState, - const uint8_t *key, - size_t keySize, - const uint8_t *nonce, - size_t nonceSize); - - int chacha20_destroy(void *state); - - int chacha20_encrypt(void *state, - const uint8_t in[], - uint8_t out[], - size_t len); - - int chacha20_seek(void *state, - unsigned long block_high, - unsigned long block_low, - unsigned offset); - int hchacha20( const uint8_t key[32], - const uint8_t nonce16[16], - uint8_t subkey[32]); - """) - - -def _HChaCha20(key, nonce): - - assert(len(key) == 32) - assert(len(nonce) == 16) - - subkey = bytearray(32) - result = _raw_chacha20_lib.hchacha20( - c_uint8_ptr(key), - c_uint8_ptr(nonce), - c_uint8_ptr(subkey)) - if result: - raise ValueError("Error %d when deriving subkey with HChaCha20" % result) - - return subkey - - -class ChaCha20Cipher(object): - """ChaCha20 (or XChaCha20) cipher object. - Do not create it directly. Use :py:func:`new` instead. - - :var nonce: The nonce with length 8, 12 or 24 bytes - :vartype nonce: bytes - """ - - block_size = 1 - - def __init__(self, key, nonce): - """Initialize a ChaCha20/XChaCha20 cipher object - - See also `new()` at the module level.""" - - self.nonce = _copy_bytes(None, None, nonce) - - # XChaCha20 requires a key derivation with HChaCha20 - # See 2.3 in https://tools.ietf.org/html/draft-arciszewski-xchacha-03 - if len(nonce) == 24: - key = _HChaCha20(key, nonce[:16]) - nonce = b'\x00' * 4 + nonce[16:] - self._name = "XChaCha20" - else: - self._name = "ChaCha20" - nonce = self.nonce - - self._next = ( self.encrypt, self.decrypt ) - - self._state = VoidPointer() - result = _raw_chacha20_lib.chacha20_init( - self._state.address_of(), - c_uint8_ptr(key), - c_size_t(len(key)), - nonce, - c_size_t(len(nonce))) - if result: - raise ValueError("Error %d instantiating a %s cipher" % (result, - self._name)) - self._state = SmartPointer(self._state.get(), - _raw_chacha20_lib.chacha20_destroy) - - def encrypt(self, plaintext, output=None): - """Encrypt a piece of data. - - Args: - plaintext(bytes/bytearray/memoryview): The data to encrypt, of any size. - Keyword Args: - output(bytes/bytearray/memoryview): The location where the ciphertext - is written to. If ``None``, the ciphertext is returned. - Returns: - If ``output`` is ``None``, the ciphertext is returned as ``bytes``. 
- Otherwise, ``None``. - """ - - if self.encrypt not in self._next: - raise TypeError("Cipher object can only be used for decryption") - self._next = ( self.encrypt, ) - return self._encrypt(plaintext, output) - - def _encrypt(self, plaintext, output): - """Encrypt without FSM checks""" - - if output is None: - ciphertext = create_string_buffer(len(plaintext)) - else: - ciphertext = output - - if not is_writeable_buffer(output): - raise TypeError("output must be a bytearray or a writeable memoryview") - - if len(plaintext) != len(output): - raise ValueError("output must have the same length as the input" - " (%d bytes)" % len(plaintext)) - - result = _raw_chacha20_lib.chacha20_encrypt( - self._state.get(), - c_uint8_ptr(plaintext), - c_uint8_ptr(ciphertext), - c_size_t(len(plaintext))) - if result: - raise ValueError("Error %d while encrypting with %s" % (result, self._name)) - - if output is None: - return get_raw_buffer(ciphertext) - else: - return None - - def decrypt(self, ciphertext, output=None): - """Decrypt a piece of data. - - Args: - ciphertext(bytes/bytearray/memoryview): The data to decrypt, of any size. - Keyword Args: - output(bytes/bytearray/memoryview): The location where the plaintext - is written to. If ``None``, the plaintext is returned. - Returns: - If ``output`` is ``None``, the plaintext is returned as ``bytes``. - Otherwise, ``None``. - """ - - if self.decrypt not in self._next: - raise TypeError("Cipher object can only be used for encryption") - self._next = ( self.decrypt, ) - - try: - return self._encrypt(ciphertext, output) - except ValueError as e: - raise ValueError(str(e).replace("enc", "dec")) - - def seek(self, position): - """Seek to a certain position in the key stream. - - Args: - position (integer): - The absolute position within the key stream, in bytes. - """ - - position, offset = divmod(position, 64) - block_low = position & 0xFFFFFFFF - block_high = position >> 32 - - result = _raw_chacha20_lib.chacha20_seek( - self._state.get(), - c_ulong(block_high), - c_ulong(block_low), - offset - ) - if result: - raise ValueError("Error %d while seeking with %s" % (result, self._name)) - - -def _derive_Poly1305_key_pair(key, nonce): - """Derive a tuple (r, s, nonce) for a Poly1305 MAC. - - If nonce is ``None``, a new 12-byte nonce is generated. - """ - - if len(key) != 32: - raise ValueError("Poly1305 with ChaCha20 requires a 32-byte key") - - if nonce is None: - padded_nonce = nonce = get_random_bytes(12) - elif len(nonce) == 8: - # See RFC7538, 2.6: [...] ChaCha20 as specified here requires a 96-bit - # nonce. So if the provided nonce is only 64-bit, then the first 32 - # bits of the nonce will be set to a constant number. - # This will usually be zero, but for protocols with multiple senders it may be - # different for each sender, but should be the same for all - # invocations of the function with the same key by a particular - # sender. - padded_nonce = b'\x00\x00\x00\x00' + nonce - elif len(nonce) == 12: - padded_nonce = nonce - else: - raise ValueError("Poly1305 with ChaCha20 requires an 8- or 12-byte nonce") - - rs = new(key=key, nonce=padded_nonce).encrypt(b'\x00' * 32) - return rs[:16], rs[16:], nonce - - -def new(**kwargs): - """Create a new ChaCha20 or XChaCha20 cipher - - Keyword Args: - key (bytes/bytearray/memoryview): The secret key to use. - It must be 32 bytes long. - nonce (bytes/bytearray/memoryview): A mandatory value that - must never be reused for any other encryption - done with this key. - - For ChaCha20, it must be 8 or 12 bytes long. 
- - For XChaCha20, it must be 24 bytes long. - - If not provided, 8 bytes will be randomly generated - (you can find them back in the ``nonce`` attribute). - - :Return: a :class:`Crypto.Cipher.ChaCha20.ChaCha20Cipher` object - """ - - try: - key = kwargs.pop("key") - except KeyError as e: - raise TypeError("Missing parameter %s" % e) - - nonce = kwargs.pop("nonce", None) - if nonce is None: - nonce = get_random_bytes(8) - - if len(key) != 32: - raise ValueError("ChaCha20/XChaCha20 key must be 32 bytes long") - - if len(nonce) not in (8, 12, 24): - raise ValueError("Nonce must be 8/12 bytes(ChaCha20) or 24 bytes (XChaCha20)") - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - return ChaCha20Cipher(key, nonce) - -# Size of a data block (in bytes) -block_size = 1 - -# Size of a key (in bytes) -key_size = 32 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Shadow.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Shadow.py deleted file mode 100644 index cc8c9b60ad5d5191f5e9d17e0c56e32714bfe219..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Shadow.py +++ /dev/null @@ -1,474 +0,0 @@ -# cython.* namespace for pure mode. -from __future__ import absolute_import - -__version__ = "0.29.30" - -try: - from __builtin__ import basestring -except ImportError: - basestring = str - - -# BEGIN shameless copy from Cython/minivect/minitypes.py - -class _ArrayType(object): - - is_array = True - subtypes = ['dtype'] - - def __init__(self, dtype, ndim, is_c_contig=False, is_f_contig=False, - inner_contig=False, broadcasting=None): - self.dtype = dtype - self.ndim = ndim - self.is_c_contig = is_c_contig - self.is_f_contig = is_f_contig - self.inner_contig = inner_contig or is_c_contig or is_f_contig - self.broadcasting = broadcasting - - def __repr__(self): - axes = [":"] * self.ndim - if self.is_c_contig: - axes[-1] = "::1" - elif self.is_f_contig: - axes[0] = "::1" - - return "%s[%s]" % (self.dtype, ", ".join(axes)) - - -def index_type(base_type, item): - """ - Support array type creation by slicing, e.g. double[:, :] specifies - a 2D strided array of doubles. The syntax is the same as for - Cython memoryviews. - """ - class InvalidTypeSpecification(Exception): - pass - - def verify_slice(s): - if s.start or s.stop or s.step not in (None, 1): - raise InvalidTypeSpecification( - "Only a step of 1 may be provided to indicate C or " - "Fortran contiguity") - - if isinstance(item, tuple): - step_idx = None - for idx, s in enumerate(item): - verify_slice(s) - if s.step and (step_idx or idx not in (0, len(item) - 1)): - raise InvalidTypeSpecification( - "Step may only be provided once, and only in the " - "first or last dimension.") - - if s.step == 1: - step_idx = idx - - return _ArrayType(base_type, len(item), - is_c_contig=step_idx == len(item) - 1, - is_f_contig=step_idx == 0) - elif isinstance(item, slice): - verify_slice(item) - return _ArrayType(base_type, 1, is_c_contig=bool(item.step)) - else: - # int[8] etc. 
- assert int(item) == item # array size must be a plain integer - array(base_type, item) - -# END shameless copy - - -compiled = False - -_Unspecified = object() - -# Function decorators - -def _empty_decorator(x): - return x - -def locals(**arg_types): - return _empty_decorator - -def test_assert_path_exists(*paths): - return _empty_decorator - -def test_fail_if_path_exists(*paths): - return _empty_decorator - -class _EmptyDecoratorAndManager(object): - def __call__(self, x): - return x - def __enter__(self): - pass - def __exit__(self, exc_type, exc_value, traceback): - pass - -class _Optimization(object): - pass - -cclass = ccall = cfunc = _EmptyDecoratorAndManager() - -returns = wraparound = boundscheck = initializedcheck = nonecheck = \ - embedsignature = cdivision = cdivision_warnings = \ - always_allows_keywords = profile = linetrace = infer_types = \ - unraisable_tracebacks = freelist = \ - lambda _: _EmptyDecoratorAndManager() - -exceptval = lambda _=None, check=True: _EmptyDecoratorAndManager() - -overflowcheck = lambda _: _EmptyDecoratorAndManager() -optimization = _Optimization() - -overflowcheck.fold = optimization.use_switch = \ - optimization.unpack_method_calls = lambda arg: _EmptyDecoratorAndManager() - -final = internal = type_version_tag = no_gc_clear = no_gc = _empty_decorator - -binding = lambda _: _empty_decorator - - -_cython_inline = None -def inline(f, *args, **kwds): - if isinstance(f, basestring): - global _cython_inline - if _cython_inline is None: - from Cython.Build.Inline import cython_inline as _cython_inline - return _cython_inline(f, *args, **kwds) - else: - assert len(args) == len(kwds) == 0 - return f - - -def compile(f): - from Cython.Build.Inline import RuntimeCompiledFunction - return RuntimeCompiledFunction(f) - - -# Special functions - -def cdiv(a, b): - q = a / b - if q < 0: - q += 1 - return q - -def cmod(a, b): - r = a % b - if (a*b) < 0: - r -= b - return r - - -# Emulated language constructs - -def cast(type, *args, **kwargs): - kwargs.pop('typecheck', None) - assert not kwargs - if hasattr(type, '__call__'): - return type(*args) - else: - return args[0] - -def sizeof(arg): - return 1 - -def typeof(arg): - return arg.__class__.__name__ - # return type(arg) - -def address(arg): - return pointer(type(arg))([arg]) - -def declare(type=None, value=_Unspecified, **kwds): - if type not in (None, object) and hasattr(type, '__call__'): - if value is not _Unspecified: - return type(value) - else: - return type() - else: - return value - -class _nogil(object): - """Support for 'with nogil' statement and @nogil decorator. - """ - def __call__(self, x): - if callable(x): - # Used as function decorator => return the function unchanged. - return x - # Used as conditional context manager or to create an "@nogil(True/False)" decorator => keep going. 
- return self - - def __enter__(self): - pass - def __exit__(self, exc_class, exc, tb): - return exc_class is None - -nogil = _nogil() -gil = _nogil() -del _nogil - - -# Emulated types - -class CythonMetaType(type): - - def __getitem__(type, ix): - return array(type, ix) - -CythonTypeObject = CythonMetaType('CythonTypeObject', (object,), {}) - -class CythonType(CythonTypeObject): - - def _pointer(self, n=1): - for i in range(n): - self = pointer(self) - return self - -class PointerType(CythonType): - - def __init__(self, value=None): - if isinstance(value, (ArrayType, PointerType)): - self._items = [cast(self._basetype, a) for a in value._items] - elif isinstance(value, list): - self._items = [cast(self._basetype, a) for a in value] - elif value is None or value == 0: - self._items = [] - else: - raise ValueError - - def __getitem__(self, ix): - if ix < 0: - raise IndexError("negative indexing not allowed in C") - return self._items[ix] - - def __setitem__(self, ix, value): - if ix < 0: - raise IndexError("negative indexing not allowed in C") - self._items[ix] = cast(self._basetype, value) - - def __eq__(self, value): - if value is None and not self._items: - return True - elif type(self) != type(value): - return False - else: - return not self._items and not value._items - - def __repr__(self): - return "%s *" % (self._basetype,) - -class ArrayType(PointerType): - - def __init__(self): - self._items = [None] * self._n - - -class StructType(CythonType): - - def __init__(self, cast_from=_Unspecified, **data): - if cast_from is not _Unspecified: - # do cast - if len(data) > 0: - raise ValueError('Cannot accept keyword arguments when casting.') - if type(cast_from) is not type(self): - raise ValueError('Cannot cast from %s'%cast_from) - for key, value in cast_from.__dict__.items(): - setattr(self, key, value) - else: - for key, value in data.items(): - setattr(self, key, value) - - def __setattr__(self, key, value): - if key in self._members: - self.__dict__[key] = cast(self._members[key], value) - else: - raise AttributeError("Struct has no member '%s'" % key) - - -class UnionType(CythonType): - - def __init__(self, cast_from=_Unspecified, **data): - if cast_from is not _Unspecified: - # do type cast - if len(data) > 0: - raise ValueError('Cannot accept keyword arguments when casting.') - if isinstance(cast_from, dict): - datadict = cast_from - elif type(cast_from) is type(self): - datadict = cast_from.__dict__ - else: - raise ValueError('Cannot cast from %s'%cast_from) - else: - datadict = data - if len(datadict) > 1: - raise AttributeError("Union can only store one field at a time.") - for key, value in datadict.items(): - setattr(self, key, value) - - def __setattr__(self, key, value): - if key == '__dict__': - CythonType.__setattr__(self, key, value) - elif key in self._members: - self.__dict__ = {key: cast(self._members[key], value)} - else: - raise AttributeError("Union has no member '%s'" % key) - -def pointer(basetype): - class PointerInstance(PointerType): - _basetype = basetype - return PointerInstance - -def array(basetype, n): - class ArrayInstance(ArrayType): - _basetype = basetype - _n = n - return ArrayInstance - -def struct(**members): - class StructInstance(StructType): - _members = members - for key in members: - setattr(StructInstance, key, None) - return StructInstance - -def union(**members): - class UnionInstance(UnionType): - _members = members - for key in members: - setattr(UnionInstance, key, None) - return UnionInstance - -class typedef(CythonType): - - def 
__init__(self, type, name=None): - self._basetype = type - self.name = name - - def __call__(self, *arg): - value = cast(self._basetype, *arg) - return value - - def __repr__(self): - return self.name or str(self._basetype) - - __getitem__ = index_type - -class _FusedType(CythonType): - pass - - -def fused_type(*args): - if not args: - raise TypeError("Expected at least one type as argument") - - # Find the numeric type with biggest rank if all types are numeric - rank = -1 - for type in args: - if type not in (py_int, py_long, py_float, py_complex): - break - - if type_ordering.index(type) > rank: - result_type = type - else: - return result_type - - # Not a simple numeric type, return a fused type instance. The result - # isn't really meant to be used, as we can't keep track of the context in - # pure-mode. Casting won't do anything in this case. - return _FusedType() - - -def _specialized_from_args(signatures, args, kwargs): - "Perhaps this should be implemented in a TreeFragment in Cython code" - raise Exception("yet to be implemented") - - -py_int = typedef(int, "int") -try: - py_long = typedef(long, "long") -except NameError: # Py3 - py_long = typedef(int, "long") -py_float = typedef(float, "float") -py_complex = typedef(complex, "double complex") - - -# Predefined types - -int_types = ['char', 'short', 'Py_UNICODE', 'int', 'Py_UCS4', 'long', 'longlong', 'Py_ssize_t', 'size_t'] -float_types = ['longdouble', 'double', 'float'] -complex_types = ['longdoublecomplex', 'doublecomplex', 'floatcomplex', 'complex'] -other_types = ['bint', 'void', 'Py_tss_t'] - -to_repr = { - 'longlong': 'long long', - 'longdouble': 'long double', - 'longdoublecomplex': 'long double complex', - 'doublecomplex': 'double complex', - 'floatcomplex': 'float complex', -}.get - -gs = globals() - -# note: cannot simply name the unicode type here as 2to3 gets in the way and replaces it by str -try: - import __builtin__ as builtins -except ImportError: # Py3 - import builtins - -gs['unicode'] = typedef(getattr(builtins, 'unicode', str), 'unicode') -del builtins - -for name in int_types: - reprname = to_repr(name, name) - gs[name] = typedef(py_int, reprname) - if name not in ('Py_UNICODE', 'Py_UCS4') and not name.endswith('size_t'): - gs['u'+name] = typedef(py_int, "unsigned " + reprname) - gs['s'+name] = typedef(py_int, "signed " + reprname) - -for name in float_types: - gs[name] = typedef(py_float, to_repr(name, name)) - -for name in complex_types: - gs[name] = typedef(py_complex, to_repr(name, name)) - -bint = typedef(bool, "bint") -void = typedef(None, "void") -Py_tss_t = typedef(None, "Py_tss_t") - -for t in int_types + float_types + complex_types + other_types: - for i in range(1, 4): - gs["%s_%s" % ('p'*i, t)] = gs[t]._pointer(i) - -NULL = gs['p_void'](0) - -# looks like 'gs' has some users out there by now... -#del gs - -integral = floating = numeric = _FusedType() - -type_ordering = [py_int, py_long, py_float, py_complex] - -class CythonDotParallel(object): - """ - The cython.parallel module. 
- """ - - __all__ = ['parallel', 'prange', 'threadid'] - - def parallel(self, num_threads=None): - return nogil - - def prange(self, start=0, stop=None, step=1, nogil=False, schedule=None, chunksize=None, num_threads=None): - if stop is None: - stop = start - start = 0 - return range(start, stop, step) - - def threadid(self): - return 0 - - # def threadsavailable(self): - # return 1 - -import sys -sys.modules['cython.parallel'] = CythonDotParallel() -del sys diff --git a/spaces/asbeabi/PoCs/README.md b/spaces/asbeabi/PoCs/README.md deleted file mode 100644 index a7f18c5f6e7b22ca031dafe61d6005b150ff39ad..0000000000000000000000000000000000000000 --- a/spaces/asbeabi/PoCs/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: PoCs -emoji: 🦀 -colorFrom: green -colorTo: blue -sdk: static -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/atticus/image-text-retrival-huster/scripts/dataset.py b/spaces/atticus/image-text-retrival-huster/scripts/dataset.py deleted file mode 100644 index b52c732dfc9005a1186f7879b924098e2313120e..0000000000000000000000000000000000000000 --- a/spaces/atticus/image-text-retrival-huster/scripts/dataset.py +++ /dev/null @@ -1,178 +0,0 @@ -# make.texts.py -from __future__ import print_function -import os -import os.path as osp -from pycocotools.coco import COCO -# import gensim -# from gensim.models import Doc2Vec -import numpy as np -import scipy.io as sio -import os -import os.path as osp -from pycocotools.coco import COCO -import pprint -import os -import os.path as osp -import json -from nltk.tokenize import RegexpTokenizer -from tqdm import tqdm - -"""process texts -python 2 needed by `jhlau/doc2vec`, and COCO api CAN work with python 2.7. -So I choose to create a virtual env of python 2.7. - -dependencies: - matplotlib (COCO api) - smart_open (gensim) -""" - -# COCO 原本的 annotations 中就有各 classes 的 ID,但不连续(从 1 标到 90 但实际只有 80 个)。这里按原有的 category id 的升序重新定义连续的、0-based 的 class ID。 -# train 和 val 都包含所有类,所以这里只用 val set 处理。 -# 结果写入 class-name.COCO.txt - -def remake_classname(): - """process class order - Record the mapping between tightened/discretized 0-base class ID, - original class ID and class name in `class-name.COCO.txt`, - with format ` `. - - The class order is consistent to the ascending order of the original IDs. - """ - - COCO_P = "/dataset/coco" - ANNO_P = osp.join(COCO_P, "annotations") - SPLIT = ["val", "train"] - - for _split in SPLIT: - print("---", _split, "---") - anno_file = osp.join(ANNO_P, "instances_{}2017.json".format(_split)) - coco = COCO(anno_file) - cats = coco.loadCats(coco.getCatIds()) - # print(cats[0]) - cls_id = {c["name"]: c["id"] for c in cats} # 它本身就是按 category id 升序 - # pprint.pprint(cls_id) - with open("class-name.COCO.txt", "w") as f: - for new_id, c in enumerate(cls_id): - old_id = cls_id[c]# - 1 - cn = c.replace(" ", "_") - # format: - f.write("{} {} {}\n".format(new_id, old_id, cn)) - - break # 只用 val set - -def remake_idmap(): - # 合并 train、val 两个集合,统一按原本的 id(即 images 文件名中的数字,也是不连续的,且 train、val 无重合)升序重新排 0-based 的 data ID。 - # 结果写入 id-map.COCO.txt - # make.id-map.py - """discretization of the original file ID - Map the file ID to sequential {0, 1, ..., n}, - and record this mapping in `id-map.txt`, - with format ` `. - - Note that the new ids are 0-base. 
- """ - - TRAIN_P = "train2017" - VAL_P = "val2017" - - file_list = [f for f in os.listdir(os.path.join("/dataset/coco", TRAIN_P)) if (".jpg" in f)] - file_list.extend([f for f in os.listdir(os.path.join("/dataset/coco", VAL_P)) if (".jpg" in f)]) - print("#data:", len(file_list)) # 12,3287 - - id_key = lambda x: int(x.split(".jpg")[0]) - file_list = sorted(file_list, key=id_key) # 按 image ID 升序 - # print(file_list[:15]) - - with open("id-map.COCO.txt", "w") as f: - # format: - for i, f_name in enumerate(file_list): - _original_id = id_key(f_name) - f.write("{} {} {}\n".format(i, _original_id, f_name)) - # if i > 5: break - print("DONE") - - -# COCO -COCO_P = "/dataset/coco" -ANNO_P = osp.join(COCO_P, "annotations") -SPLIT = ["val", "train"] -# doc2vec -MODEL = "/home/dataset/Doc2Vec/enwiki_dbow/doc2vec.bin" -start_alpha = 0.01 -infer_epoch = 1000 -DIM = 300 # dimension of the doc2vec feature -# id_map_data = {} -# with open("id-map.txt", "r") as f: -# for line in f: -# line = line.strip() -# _new_id, _old_id, _ = line.split() -# id_map_data[int(_old_id)] = int(_new_id) -# N_DATA = len(id_map_data) -# print("#data:", N_DATA) - -# pre-trained Doc2Vec model -# model = Doc2Vec.load(MODEL) -tokenizer = RegexpTokenizer(r'\w+') -def dataset_format(filepath, filename, imgid, split, sentences, cocoid): - data = {} - data['filepath'] = filepath - data['sentids'] = [imgid * 5 + idx for idx in range(5)] - data['filename'] = filename - data['imgid'] = imgid - data['split'] = split - data['sentences'] = [{'tokens': tokenizer.tokenize(sentence), - 'raw': sentence, - 'imgid': imgid, - 'sentid': imgid * 5 + idx} - for idx, sentence in enumerate(sentences)] - data['cocoid'] = cocoid - return data - -dataset_anns = {} -dataset_anns['images'] = [] -dataset_anns['dataset'] = 'coco' -for __split in SPLIT: - print("---", __split, "---") - anno_file = osp.join(ANNO_P, "instances_{}2017.json".format(__split)) - caps_file = osp.join(ANNO_P, "captions_{}2017.json".format(__split)) - coco = COCO(anno_file) - coco_caps = COCO(caps_file) - new_image_id_file = open("id-map.COCO.txt", 'r') - new_img_id_map = {image_id.strip().split(" ")[2]: image_id.strip().split(" ")[0] for image_id in new_image_id_file.readlines()} - id_list = coco.getImgIds() - for _old_id in tqdm(id_list): - # _new_id = id_map_data[_old_id] - _annIds = coco_caps.getAnnIds(imgIds=_old_id) - _anns = coco_caps.loadAnns(_annIds) - - _filepath = __split + '2017' - _filename = coco.imgs[_old_id]['file_name'] - _imgid = int(new_img_id_map[_filename]) - _split = __split - # print(len(anns)) - # pprint.pprint(anns) - _sentences = [_a["caption"] for _a in _anns] - _cocoid = _old_id - formated_data = dataset_format(_filepath, _filename, _imgid, _split, _sentences, _cocoid) - dataset_anns['images'].append(formated_data) - # pprint.pprint(sentences) - # sentences = [gensim.utils.simple_preprocess(s) for s in sentences] - # pprint.pprint(sentences) - # doc = [] - # for s in sentences: - # doc.extend(s) - # print(doc) - # vec = model.infer_vector(doc) - # print(vec.shape) - # texts.append(vec[np.newaxis, :]) - # break - # break - -with open('dataset_anns.json', 'w') as fp: - json.dump(dataset_anns, fp) - -new_image_id_file.close() - -# texts = np.vstack(texts).astype(np.float32) -# print("texts:", texts.shape, texts.dtype) # (123287, 300) dtype(' - -# [Optional] Uncomment this line to install global node packages. 
-# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g " 2>&1 diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/VTKLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/VTKLoader.js deleted file mode 100644 index c4e319e472f3db60e99cf67751b5430ee992552f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/VTKLoader.js +++ /dev/null @@ -1,1162 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author Alex Pletzer - * - * Updated on 22.03.2017 - * VTK header is now parsed and used to extract all the compressed data - * @author Andrii Iudin https://github.com/andreyyudin - * @author Paul Kibet Korir https://github.com/polarise - * @author Sriram Somasundharam https://github.com/raamssundar - */ - -THREE.VTKLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? manager : THREE.DefaultLoadingManager; - -}; - -Object.assign( THREE.VTKLoader.prototype, THREE.EventDispatcher.prototype, { - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new THREE.FileLoader( scope.manager ); - loader.setPath( scope.path ); - loader.setResponseType( 'arraybuffer' ); - loader.load( url, function ( text ) { - - onLoad( scope.parse( text ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - parse: function ( data ) { - - function parseASCII( data ) { - - // connectivity of the triangles - var indices = []; - - // triangles vertices - var positions = []; - - // red, green, blue colors in the range 0 to 1 - var colors = []; - - // normal vector, one per vertex - var normals = []; - - var result; - - // pattern for reading vertices, 3 floats or integers - var pat3Floats = /(\-?\d+\.?[\d\-\+e]*)\s+(\-?\d+\.?[\d\-\+e]*)\s+(\-?\d+\.?[\d\-\+e]*)/g; - - // pattern for connectivity, an integer followed by any number of ints - // the first integer is the number of polygon nodes - var patConnectivity = /^(\d+)\s+([\s\d]*)/; - - // indicates start of vertex data section - var patPOINTS = /^POINTS /; - - // indicates start of polygon connectivity section - var patPOLYGONS = /^POLYGONS /; - - // indicates start of triangle strips section - var patTRIANGLE_STRIPS = /^TRIANGLE_STRIPS /; - - // POINT_DATA number_of_values - var patPOINT_DATA = /^POINT_DATA[ ]+(\d+)/; - - // CELL_DATA number_of_polys - var patCELL_DATA = /^CELL_DATA[ ]+(\d+)/; - - // Start of color section - var patCOLOR_SCALARS = /^COLOR_SCALARS[ ]+(\w+)[ ]+3/; - - // NORMALS Normals float - var patNORMALS = /^NORMALS[ ]+(\w+)[ ]+(\w+)/; - - var inPointsSection = false; - var inPolygonsSection = false; - var inTriangleStripSection = false; - var inPointDataSection = false; - var inCellDataSection = false; - var inColorSection = false; - var inNormalsSection = false; - - var lines = data.split( '\n' ); - - for ( var i in lines ) { - - var line = lines[ i ]; - - if ( inPointsSection ) { - - // get the vertices - while ( ( result = pat3Floats.exec( line ) ) !== null ) { - - var x = parseFloat( result[ 1 ] ); - var y = parseFloat( result[ 2 ] ); - var z = parseFloat( result[ 3 ] ); - positions.push( x, y, z ); - - } - - } else if ( inPolygonsSection ) { - - if ( ( result = patConnectivity.exec( line ) ) !== null ) { - - // numVertices i0 i1 i2 ... 
- var numVertices = parseInt( result[ 1 ] ); - var inds = result[ 2 ].split( /\s+/ ); - - if ( numVertices >= 3 ) { - - var i0 = parseInt( inds[ 0 ] ); - var i1, i2; - var k = 1; - // split the polygon in numVertices - 2 triangles - for ( var j = 0; j < numVertices - 2; ++ j ) { - - i1 = parseInt( inds[ k ] ); - i2 = parseInt( inds[ k + 1 ] ); - indices.push( i0, i1, i2 ); - k ++; - - } - - } - - } - - } else if ( inTriangleStripSection ) { - - if ( ( result = patConnectivity.exec( line ) ) !== null ) { - - // numVertices i0 i1 i2 ... - var numVertices = parseInt( result[ 1 ] ); - var inds = result[ 2 ].split( /\s+/ ); - - if ( numVertices >= 3 ) { - - var i0, i1, i2; - // split the polygon in numVertices - 2 triangles - for ( var j = 0; j < numVertices - 2; j ++ ) { - - if ( j % 2 === 1 ) { - - i0 = parseInt( inds[ j ] ); - i1 = parseInt( inds[ j + 2 ] ); - i2 = parseInt( inds[ j + 1 ] ); - indices.push( i0, i1, i2 ); - - } else { - - i0 = parseInt( inds[ j ] ); - i1 = parseInt( inds[ j + 1 ] ); - i2 = parseInt( inds[ j + 2 ] ); - indices.push( i0, i1, i2 ); - - } - - } - - } - - } - - } else if ( inPointDataSection || inCellDataSection ) { - - if ( inColorSection ) { - - // Get the colors - - while ( ( result = pat3Floats.exec( line ) ) !== null ) { - - var r = parseFloat( result[ 1 ] ); - var g = parseFloat( result[ 2 ] ); - var b = parseFloat( result[ 3 ] ); - colors.push( r, g, b ); - - } - - } else if ( inNormalsSection ) { - - // Get the normal vectors - - while ( ( result = pat3Floats.exec( line ) ) !== null ) { - - var nx = parseFloat( result[ 1 ] ); - var ny = parseFloat( result[ 2 ] ); - var nz = parseFloat( result[ 3 ] ); - normals.push( nx, ny, nz ); - - } - - } - - } - - if ( patPOLYGONS.exec( line ) !== null ) { - - inPolygonsSection = true; - inPointsSection = false; - inTriangleStripSection = false; - - } else if ( patPOINTS.exec( line ) !== null ) { - - inPolygonsSection = false; - inPointsSection = true; - inTriangleStripSection = false; - - } else if ( patTRIANGLE_STRIPS.exec( line ) !== null ) { - - inPolygonsSection = false; - inPointsSection = false; - inTriangleStripSection = true; - - } else if ( patPOINT_DATA.exec( line ) !== null ) { - - inPointDataSection = true; - inPointsSection = false; - inPolygonsSection = false; - inTriangleStripSection = false; - - } else if ( patCELL_DATA.exec( line ) !== null ) { - - inCellDataSection = true; - inPointsSection = false; - inPolygonsSection = false; - inTriangleStripSection = false; - - } else if ( patCOLOR_SCALARS.exec( line ) !== null ) { - - inColorSection = true; - inNormalsSection = false; - inPointsSection = false; - inPolygonsSection = false; - inTriangleStripSection = false; - - } else if ( patNORMALS.exec( line ) !== null ) { - - inNormalsSection = true; - inColorSection = false; - inPointsSection = false; - inPolygonsSection = false; - inTriangleStripSection = false; - - } - - } - - var geometry = new THREE.BufferGeometry(); - geometry.setIndex( indices ); - geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) ); - - if ( normals.length === positions.length ) { - - geometry.addAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) ); - - } - - if ( colors.length !== indices.length ) { - - // stagger - - if ( colors.length === positions.length ) { - - geometry.addAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) ); - - } - - } else { - - // cell - - geometry = geometry.toNonIndexed(); - var numTriangles = geometry.attributes.position.count / 3; - - if ( 
colors.length === ( numTriangles * 3 ) ) { - - var newColors = []; - - for ( var i = 0; i < numTriangles; i ++ ) { - - var r = colors[ 3 * i + 0 ]; - var g = colors[ 3 * i + 1 ]; - var b = colors[ 3 * i + 2 ]; - - newColors.push( r, g, b ); - newColors.push( r, g, b ); - newColors.push( r, g, b ); - - } - - geometry.addAttribute( 'color', new THREE.Float32BufferAttribute( newColors, 3 ) ); - - } - - } - - return geometry; - - } - - function parseBinary( data ) { - - var count, pointIndex, i, numberOfPoints, s; - var buffer = new Uint8Array( data ); - var dataView = new DataView( data ); - - // Points and normals, by default, are empty - var points = []; - var normals = []; - var indices = []; - - // Going to make a big array of strings - var vtk = []; - var index = 0; - - function findString( buffer, start ) { - - var index = start; - var c = buffer[ index ]; - var s = []; - while ( c !== 10 ) { - - s.push( String.fromCharCode( c ) ); - index ++; - c = buffer[ index ]; - - } - - return { start: start, - end: index, - next: index + 1, - parsedString: s.join( '' ) }; - - } - - var state, line; - - while ( true ) { - - // Get a string - state = findString( buffer, index ); - line = state.parsedString; - - if ( line.indexOf( 'POINTS' ) === 0 ) { - - vtk.push( line ); - // Add the points - numberOfPoints = parseInt( line.split( ' ' )[ 1 ], 10 ); - - // Each point is 3 4-byte floats - count = numberOfPoints * 4 * 3; - - points = new Float32Array( numberOfPoints * 3 ); - - pointIndex = state.next; - for ( i = 0; i < numberOfPoints; i ++ ) { - - points[ 3 * i ] = dataView.getFloat32( pointIndex, false ); - points[ 3 * i + 1 ] = dataView.getFloat32( pointIndex + 4, false ); - points[ 3 * i + 2 ] = dataView.getFloat32( pointIndex + 8, false ); - pointIndex = pointIndex + 12; - - } - // increment our next pointer - state.next = state.next + count + 1; - - } else if ( line.indexOf( 'TRIANGLE_STRIPS' ) === 0 ) { - - var numberOfStrips = parseInt( line.split( ' ' )[ 1 ], 10 ); - var size = parseInt( line.split( ' ' )[ 2 ], 10 ); - // 4 byte integers - count = size * 4; - - indices = new Uint32Array( 3 * size - 9 * numberOfStrips ); - var indicesIndex = 0; - - pointIndex = state.next; - for ( i = 0; i < numberOfStrips; i ++ ) { - - // For each strip, read the first value, then record that many more points - var indexCount = dataView.getInt32( pointIndex, false ); - var strip = []; - pointIndex += 4; - for ( s = 0; s < indexCount; s ++ ) { - - strip.push( dataView.getInt32( pointIndex, false ) ); - pointIndex += 4; - - } - - // retrieves the n-2 triangles from the triangle strip - for ( var j = 0; j < indexCount - 2; j ++ ) { - - if ( j % 2 ) { - - indices[ indicesIndex ++ ] = strip[ j ]; - indices[ indicesIndex ++ ] = strip[ j + 2 ]; - indices[ indicesIndex ++ ] = strip[ j + 1 ]; - - } else { - - - indices[ indicesIndex ++ ] = strip[ j ]; - indices[ indicesIndex ++ ] = strip[ j + 1 ]; - indices[ indicesIndex ++ ] = strip[ j + 2 ]; - - } - - } - - } - // increment our next pointer - state.next = state.next + count + 1; - - } else if ( line.indexOf( 'POLYGONS' ) === 0 ) { - - var numberOfStrips = parseInt( line.split( ' ' )[ 1 ], 10 ); - var size = parseInt( line.split( ' ' )[ 2 ], 10 ); - // 4 byte integers - count = size * 4; - - indices = new Uint32Array( 3 * size - 9 * numberOfStrips ); - var indicesIndex = 0; - - pointIndex = state.next; - for ( i = 0; i < numberOfStrips; i ++ ) { - - // For each strip, read the first value, then record that many more points - var indexCount = dataView.getInt32( 
pointIndex, false ); - var strip = []; - pointIndex += 4; - for ( s = 0; s < indexCount; s ++ ) { - - strip.push( dataView.getInt32( pointIndex, false ) ); - pointIndex += 4; - - } - - // divide the polygon in n-2 triangle - for ( var j = 1; j < indexCount - 1; j ++ ) { - - indices[ indicesIndex ++ ] = strip[ 0 ]; - indices[ indicesIndex ++ ] = strip[ j ]; - indices[ indicesIndex ++ ] = strip[ j + 1 ]; - - } - - } - // increment our next pointer - state.next = state.next + count + 1; - - } else if ( line.indexOf( 'POINT_DATA' ) === 0 ) { - - numberOfPoints = parseInt( line.split( ' ' )[ 1 ], 10 ); - - // Grab the next line - state = findString( buffer, state.next ); - - // Now grab the binary data - count = numberOfPoints * 4 * 3; - - normals = new Float32Array( numberOfPoints * 3 ); - pointIndex = state.next; - for ( i = 0; i < numberOfPoints; i ++ ) { - - normals[ 3 * i ] = dataView.getFloat32( pointIndex, false ); - normals[ 3 * i + 1 ] = dataView.getFloat32( pointIndex + 4, false ); - normals[ 3 * i + 2 ] = dataView.getFloat32( pointIndex + 8, false ); - pointIndex += 12; - - } - - // Increment past our data - state.next = state.next + count; - - } - - // Increment index - index = state.next; - - if ( index >= buffer.byteLength ) { - - break; - - } - - } - - var geometry = new THREE.BufferGeometry(); - geometry.setIndex( new THREE.BufferAttribute( indices, 1 ) ); - geometry.addAttribute( 'position', new THREE.BufferAttribute( points, 3 ) ); - - if ( normals.length === points.length ) { - - geometry.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) ); - - } - - return geometry; - - } - - function Float32Concat( first, second ) { - - var firstLength = first.length, result = new Float32Array( firstLength + second.length ); - - result.set( first ); - result.set( second, firstLength ); - - return result; - - } - - function Int32Concat( first, second ) { - - var firstLength = first.length, result = new Int32Array( firstLength + second.length ); - - result.set( first ); - result.set( second, firstLength ); - - return result; - - } - - function parseXML( stringFile ) { - - // Changes XML to JSON, based on https://davidwalsh.name/convert-xml-json - - function xmlToJson( xml ) { - - // Create the return object - var obj = {}; - - if ( xml.nodeType === 1 ) { // element - - // do attributes - - if ( xml.attributes ) { - - if ( xml.attributes.length > 0 ) { - - obj[ 'attributes' ] = {}; - - for ( var j = 0; j < xml.attributes.length; j ++ ) { - - var attribute = xml.attributes.item( j ); - obj[ 'attributes' ][ attribute.nodeName ] = attribute.nodeValue.trim(); - - } - - } - - } - - } else if ( xml.nodeType === 3 ) { // text - - obj = xml.nodeValue.trim(); - - } - - // do children - if ( xml.hasChildNodes() ) { - - for ( var i = 0; i < xml.childNodes.length; i ++ ) { - - var item = xml.childNodes.item( i ); - var nodeName = item.nodeName; - - if ( typeof obj[ nodeName ] === 'undefined' ) { - - var tmp = xmlToJson( item ); - - if ( tmp !== '' ) obj[ nodeName ] = tmp; - - } else { - - if ( typeof obj[ nodeName ].push === 'undefined' ) { - - var old = obj[ nodeName ]; - obj[ nodeName ] = [ old ]; - - } - - var tmp = xmlToJson( item ); - - if ( tmp !== '' ) obj[ nodeName ].push( tmp ); - - } - - } - - } - - return obj; - - } - - // Taken from Base64-js - function Base64toByteArray( b64 ) { - - var Arr = typeof Uint8Array !== 'undefined' ? 
Uint8Array : Array; - var i; - var lookup = []; - var revLookup = []; - var code = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - var len = code.length; - - for ( i = 0; i < len; i ++ ) { - - lookup[ i ] = code[ i ]; - - } - - for ( i = 0; i < len; ++ i ) { - - revLookup[ code.charCodeAt( i ) ] = i; - - } - - revLookup[ '-'.charCodeAt( 0 ) ] = 62; - revLookup[ '_'.charCodeAt( 0 ) ] = 63; - - var j, l, tmp, placeHolders, arr; - var len = b64.length; - - if ( len % 4 > 0 ) { - - throw new Error( 'Invalid string. Length must be a multiple of 4' ); - - } - - placeHolders = b64[ len - 2 ] === '=' ? 2 : b64[ len - 1 ] === '=' ? 1 : 0; - arr = new Arr( len * 3 / 4 - placeHolders ); - l = placeHolders > 0 ? len - 4 : len; - - var L = 0; - - for ( i = 0, j = 0; i < l; i += 4, j += 3 ) { - - tmp = ( revLookup[ b64.charCodeAt( i ) ] << 18 ) | ( revLookup[ b64.charCodeAt( i + 1 ) ] << 12 ) | ( revLookup[ b64.charCodeAt( i + 2 ) ] << 6 ) | revLookup[ b64.charCodeAt( i + 3 ) ]; - arr[ L ++ ] = ( tmp & 0xFF0000 ) >> 16; - arr[ L ++ ] = ( tmp & 0xFF00 ) >> 8; - arr[ L ++ ] = tmp & 0xFF; - - } - - if ( placeHolders === 2 ) { - - tmp = ( revLookup[ b64.charCodeAt( i ) ] << 2 ) | ( revLookup[ b64.charCodeAt( i + 1 ) ] >> 4 ); - arr[ L ++ ] = tmp & 0xFF; - - } else if ( placeHolders === 1 ) { - - tmp = ( revLookup[ b64.charCodeAt( i ) ] << 10 ) | ( revLookup[ b64.charCodeAt( i + 1 ) ] << 4 ) | ( revLookup[ b64.charCodeAt( i + 2 ) ] >> 2 ); - arr[ L ++ ] = ( tmp >> 8 ) & 0xFF; - arr[ L ++ ] = tmp & 0xFF; - - } - - return arr; - - } - - function parseDataArray( ele, compressed ) { - - var numBytes = 0; - - if ( json.attributes.header_type === 'UInt64' ) { - - numBytes = 8; - - } else if ( json.attributes.header_type === 'UInt32' ) { - - numBytes = 4; - - } - - - // Check the format - if ( ele.attributes.format === 'binary' && compressed ) { - - var rawData, content, byteData, blocks, cSizeStart, headerSize, padding, dataOffsets, currentOffset; - - if ( ele.attributes.type === 'Float32' ) { - - var txt = new Float32Array( ); - - } else if ( ele.attributes.type === 'Int64' ) { - - var txt = new Int32Array( ); - - } - - // VTP data with the header has the following structure: - // [#blocks][#u-size][#p-size][#c-size-1][#c-size-2]...[#c-size-#blocks][DATA] - // - // Each token is an integer value whose type is specified by "header_type" at the top of the file (UInt32 if no type specified). The token meanings are: - // [#blocks] = Number of blocks - // [#u-size] = Block size before compression - // [#p-size] = Size of last partial block (zero if it not needed) - // [#c-size-i] = Size in bytes of block i after compression - // - // The [DATA] portion stores contiguously every block appended together. The offset from the beginning of the data section to the beginning of a block is - // computed by summing the compressed block sizes from preceding blocks according to the header. - - rawData = ele[ '#text' ]; - - byteData = Base64toByteArray( rawData ); - - blocks = byteData[ 0 ]; - for ( var i = 1; i < numBytes - 1; i ++ ) { - - blocks = blocks | ( byteData[ i ] << ( i * numBytes ) ); - - } - - headerSize = ( blocks + 3 ) * numBytes; - padding = ( ( headerSize % 3 ) > 0 ) ? 3 - ( headerSize % 3 ) : 0; - headerSize = headerSize + padding; - - dataOffsets = []; - currentOffset = headerSize; - dataOffsets.push( currentOffset ); - - // Get the blocks sizes after the compression. 
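- // A quick worked illustration of the arithmetic below (illustrative numbers
- // only, assuming header_type UInt32 so numBytes = 4): for a decoded header of
- // [2, 65536, 1024, 300, 150] we get blocks = 2 and headerSize = (2 + 3) * 4 = 20,
- // padded up to 21 so it is a multiple of 3. The loop that follows reads the
- // compressed sizes 300 and 150 and extends dataOffsets to [21, 321, 471].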
- // There are three blocks before c-size-i, so we skip 3*numBytes - cSizeStart = 3 * numBytes; - - for ( var i = 0; i < blocks; i ++ ) { - - var currentBlockSize = byteData[ i * numBytes + cSizeStart ]; - - for ( var j = 1; j < numBytes - 1; j ++ ) { - - // Each data point consists of 8 bytes regardless of the header type - currentBlockSize = currentBlockSize | ( byteData[ i * numBytes + cSizeStart + j ] << ( j * 8 ) ); - - } - - currentOffset = currentOffset + currentBlockSize; - dataOffsets.push( currentOffset ); - - } - - for ( var i = 0; i < dataOffsets.length - 1; i ++ ) { - - var inflate = new Zlib.Inflate( byteData.slice( dataOffsets[ i ], dataOffsets[ i + 1 ] ), { resize: true, verify: true } ); // eslint-disable-line no-undef - content = inflate.decompress(); - content = content.buffer; - - if ( ele.attributes.type === 'Float32' ) { - - content = new Float32Array( content ); - txt = Float32Concat( txt, content ); - - } else if ( ele.attributes.type === 'Int64' ) { - - content = new Int32Array( content ); - txt = Int32Concat( txt, content ); - - } - - } - - delete ele[ '#text' ]; - - if ( ele.attributes.type === 'Int64' ) { - - if ( ele.attributes.format === 'binary' ) { - - txt = txt.filter( function ( el, idx ) { - - if ( idx % 2 !== 1 ) return true; - - } ); - - } - - } - - } else { - - if ( ele.attributes.format === 'binary' && ! compressed ) { - - var content = Base64toByteArray( ele[ '#text' ] ); - - // VTP data for the uncompressed case has the following structure: - // [#bytes][DATA] - // where "[#bytes]" is an integer value specifying the number of bytes in the block of data following it. - content = content.slice( numBytes ).buffer; - - } else { - - if ( ele[ '#text' ] ) { - - var content = ele[ '#text' ].split( /\s+/ ).filter( function ( el ) { - - if ( el !== '' ) return el; - - } ); - - } else { - - var content = new Int32Array( 0 ).buffer; - - } - - } - - delete ele[ '#text' ]; - - // Get the content and optimize it - if ( ele.attributes.type === 'Float32' ) { - - var txt = new Float32Array( content ); - - } else if ( ele.attributes.type === 'Int32' ) { - - var txt = new Int32Array( content ); - - } else if ( ele.attributes.type === 'Int64' ) { - - var txt = new Int32Array( content ); - - if ( ele.attributes.format === 'binary' ) { - - txt = txt.filter( function ( el, idx ) { - - if ( idx % 2 !== 1 ) return true; - - } ); - - } - - } - - } // endif ( ele.attributes.format === 'binary' && compressed ) - - return txt; - - } - - // Main part - // Get Dom - var dom = null; - - if ( window.DOMParser ) { - - try { - - dom = ( new DOMParser() ).parseFromString( stringFile, 'text/xml' ); - - } catch ( e ) { - - dom = null; - - } - - } else if ( window.ActiveXObject ) { - - try { - - dom = new ActiveXObject( 'Microsoft.XMLDOM' ); // eslint-disable-line no-undef - dom.async = false; - - if ( ! dom.loadXML( /* xml */ ) ) { - - throw new Error( dom.parseError.reason + dom.parseError.srcText ); - - } - - } catch ( e ) { - - dom = null; - - } - - } else { - - throw new Error( 'Cannot parse xml string!' 
); - - } - - // Get the doc - var doc = dom.documentElement; - // Convert to json - var json = xmlToJson( doc ); - var points = []; - var normals = []; - var indices = []; - - if ( json.PolyData ) { - - var piece = json.PolyData.Piece; - var compressed = json.attributes.hasOwnProperty( 'compressor' ); - - // Can be optimized - // Loop through the sections - var sections = [ 'PointData', 'Points', 'Strips', 'Polys' ];// +['CellData', 'Verts', 'Lines']; - var sectionIndex = 0, numberOfSections = sections.length; - - while ( sectionIndex < numberOfSections ) { - - var section = piece[ sections[ sectionIndex ] ]; - - // If it has a DataArray in it - - if ( section && section.DataArray ) { - - // Depending on the number of DataArrays - - if ( Object.prototype.toString.call( section.DataArray ) === '[object Array]' ) { - - var arr = section.DataArray; - - } else { - - var arr = [ section.DataArray ]; - - } - - var dataArrayIndex = 0, numberOfDataArrays = arr.length; - - while ( dataArrayIndex < numberOfDataArrays ) { - - // Parse the DataArray - if ( ( '#text' in arr[ dataArrayIndex ] ) && ( arr[ dataArrayIndex ][ '#text' ].length > 0 ) ) { - - arr[ dataArrayIndex ].text = parseDataArray( arr[ dataArrayIndex ], compressed ); - - } - - dataArrayIndex ++; - - } - - switch ( sections[ sectionIndex ] ) { - - // if iti is point data - case 'PointData': - - var numberOfPoints = parseInt( piece.attributes.NumberOfPoints ); - var normalsName = section.attributes.Normals; - - if ( numberOfPoints > 0 ) { - - for ( var i = 0, len = arr.length; i < len; i ++ ) { - - if ( normalsName === arr[ i ].attributes.Name ) { - - var components = arr[ i ].attributes.NumberOfComponents; - normals = new Float32Array( numberOfPoints * components ); - normals.set( arr[ i ].text, 0 ); - - } - - } - - } - - break; - - // if it is points - case 'Points': - - var numberOfPoints = parseInt( piece.attributes.NumberOfPoints ); - - if ( numberOfPoints > 0 ) { - - var components = section.DataArray.attributes.NumberOfComponents; - points = new Float32Array( numberOfPoints * components ); - points.set( section.DataArray.text, 0 ); - - } - - break; - - // if it is strips - case 'Strips': - - var numberOfStrips = parseInt( piece.attributes.NumberOfStrips ); - - if ( numberOfStrips > 0 ) { - - var connectivity = new Int32Array( section.DataArray[ 0 ].text.length ); - var offset = new Int32Array( section.DataArray[ 1 ].text.length ); - connectivity.set( section.DataArray[ 0 ].text, 0 ); - offset.set( section.DataArray[ 1 ].text, 0 ); - - var size = numberOfStrips + connectivity.length; - indices = new Uint32Array( 3 * size - 9 * numberOfStrips ); - - var indicesIndex = 0; - - for ( var i = 0, len = numberOfStrips; i < len; i ++ ) { - - var strip = []; - - for ( var s = 0, len1 = offset[ i ], len0 = 0; s < len1 - len0; s ++ ) { - - strip.push( connectivity[ s ] ); - - if ( i > 0 ) len0 = offset[ i - 1 ]; - - } - - for ( var j = 0, len1 = offset[ i ], len0 = 0; j < len1 - len0 - 2; j ++ ) { - - if ( j % 2 ) { - - indices[ indicesIndex ++ ] = strip[ j ]; - indices[ indicesIndex ++ ] = strip[ j + 2 ]; - indices[ indicesIndex ++ ] = strip[ j + 1 ]; - - } else { - - indices[ indicesIndex ++ ] = strip[ j ]; - indices[ indicesIndex ++ ] = strip[ j + 1 ]; - indices[ indicesIndex ++ ] = strip[ j + 2 ]; - - } - - if ( i > 0 ) len0 = offset[ i - 1 ]; - - } - - } - - } - - break; - - // if it is polys - case 'Polys': - - var numberOfPolys = parseInt( piece.attributes.NumberOfPolys ); - - if ( numberOfPolys > 0 ) { - - var connectivity = new 
Int32Array( section.DataArray[ 0 ].text.length ); - var offset = new Int32Array( section.DataArray[ 1 ].text.length ); - connectivity.set( section.DataArray[ 0 ].text, 0 ); - offset.set( section.DataArray[ 1 ].text, 0 ); - - var size = numberOfPolys + connectivity.length; - indices = new Uint32Array( 3 * size - 9 * numberOfPolys ); - var indicesIndex = 0, connectivityIndex = 0; - var i = 0, len = numberOfPolys, len0 = 0; - - while ( i < len ) { - - var poly = []; - var s = 0, len1 = offset[ i ]; - - while ( s < len1 - len0 ) { - - poly.push( connectivity[ connectivityIndex ++ ] ); - s ++; - - } - - var j = 1; - - while ( j < len1 - len0 - 1 ) { - - indices[ indicesIndex ++ ] = poly[ 0 ]; - indices[ indicesIndex ++ ] = poly[ j ]; - indices[ indicesIndex ++ ] = poly[ j + 1 ]; - j ++; - - } - - i ++; - len0 = offset[ i - 1 ]; - - } - - } - - break; - - default: - break; - - } - - } - - sectionIndex ++; - - } - - var geometry = new THREE.BufferGeometry(); - geometry.setIndex( new THREE.BufferAttribute( indices, 1 ) ); - geometry.addAttribute( 'position', new THREE.BufferAttribute( points, 3 ) ); - - if ( normals.length === points.length ) { - - geometry.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) ); - - } - - return geometry; - - } else { - - // TODO for vtu,vti,and other xml formats - - } - - } - - function getStringFile( data ) { - - var stringFile = ''; - var charArray = new Uint8Array( data ); - var i = 0; - var len = charArray.length; - - while ( len -- ) { - - stringFile += String.fromCharCode( charArray[ i ++ ] ); - - } - - return stringFile; - - } - - // get the 5 first lines of the files to check if there is the key word binary - var meta = THREE.LoaderUtils.decodeText( new Uint8Array( data, 0, 250 ) ).split( '\n' ); - - if ( meta[ 0 ].indexOf( 'xml' ) !== - 1 ) { - - return parseXML( getStringFile( data ) ); - - } else if ( meta[ 2 ].includes( 'ASCII' ) ) { - - return parseASCII( getStringFile( data ) ); - - } else { - - return parseBinary( data ); - - } - - } - -} ); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/geometries/TetrahedronGeometry.js b/spaces/banana-projects/web3d/node_modules/three/src/geometries/TetrahedronGeometry.js deleted file mode 100644 index 7f4a7cd0a8484620c6717e676f81c2f0948f6679..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/geometries/TetrahedronGeometry.js +++ /dev/null @@ -1,57 +0,0 @@ -/** - * @author timothypratley / https://github.com/timothypratley - * @author Mugen87 / https://github.com/Mugen87 - */ - -import { Geometry } from '../core/Geometry.js'; -import { PolyhedronBufferGeometry } from './PolyhedronGeometry.js'; - -// TetrahedronGeometry - -function TetrahedronGeometry( radius, detail ) { - - Geometry.call( this ); - - this.type = 'TetrahedronGeometry'; - - this.parameters = { - radius: radius, - detail: detail - }; - - this.fromBufferGeometry( new TetrahedronBufferGeometry( radius, detail ) ); - this.mergeVertices(); - -} - -TetrahedronGeometry.prototype = Object.create( Geometry.prototype ); -TetrahedronGeometry.prototype.constructor = TetrahedronGeometry; - -// TetrahedronBufferGeometry - -function TetrahedronBufferGeometry( radius, detail ) { - - var vertices = [ - 1, 1, 1, - 1, - 1, 1, - 1, 1, - 1, 1, - 1, - 1 - ]; - - var indices = [ - 2, 1, 0, 0, 3, 2, 1, 3, 0, 2, 3, 1 - ]; - - PolyhedronBufferGeometry.call( this, vertices, indices, radius, detail ); - - this.type = 'TetrahedronBufferGeometry'; - - this.parameters = { - radius: radius, - detail: 
detail - }; - -} - -TetrahedronBufferGeometry.prototype = Object.create( PolyhedronBufferGeometry.prototype ); -TetrahedronBufferGeometry.prototype.constructor = TetrahedronBufferGeometry; - - -export { TetrahedronGeometry, TetrahedronBufferGeometry }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/map_particle_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/map_particle_fragment.glsl.js deleted file mode 100644 index 20dbaab554164b4d72c60dbd9ea0c0566954726e..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/map_particle_fragment.glsl.js +++ /dev/null @@ -1,9 +0,0 @@ -export default /* glsl */` -#ifdef USE_MAP - - vec2 uv = ( uvTransform * vec3( gl_PointCoord.x, 1.0 - gl_PointCoord.y, 1 ) ).xy; - vec4 mapTexel = texture2D( map, uv ); - diffuseColor *= mapTexelToLinear( mapTexel ); - -#endif -`; diff --git a/spaces/bioriAsaeru/text-to-voice/3d Vista Virtual Tour Crack Zip !!EXCLUSIVE!!.md b/spaces/bioriAsaeru/text-to-voice/3d Vista Virtual Tour Crack Zip !!EXCLUSIVE!!.md deleted file mode 100644 index 45b21b6aa0b70283be4890d93b89a06778edcfad..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/3d Vista Virtual Tour Crack Zip !!EXCLUSIVE!!.md +++ /dev/null @@ -1,22 +0,0 @@ - -

How to Download and Install 3D Vista Virtual Tour Zip

-

If you are looking for a powerful and easy-to-use software to create stunning virtual tours, you might want to check out 3D Vista Virtual Tour Zip. This software allows you to create interactive 360-degree panoramas, immersive VR tours, floor plans, hotspots, and more. You can also publish your tours online or offline, and share them with your clients or audience.

-

In this article, we will show you how to download and install 3D Vista Virtual Tour Zip on your computer. Follow these simple steps and you will be ready to create amazing virtual tours in no time.

-

3d vista virtual tour crack zip


Download → https://urloso.com/2uyPIz



-

Step 1: Download 3D Vista Virtual Tour Zip

-

First, download the software from the official website. You can choose between the Standard and the Pro version, depending on your needs and budget. The Standard version costs $199 and the Pro version costs $499, and both offer a 30-day free trial.

-

To download the software, go to https://www.3dvista.com/en/products/virtualtour and click on the "Download" button. You will be asked to enter your email address and choose your operating system (Windows or Mac). Then, click on the "Download Now" button and save the file on your computer.

-

Step 2: Install 3D Vista Virtual Tour Zip

-

Once you have downloaded the file, you need to unzip it and run the installer. To unzip the file, right-click on it and select "Extract All". Then, choose a destination folder and click on "Extract".
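If you prefer to script the extraction step instead of using the right-click menu, Python's built-in zipfile module can do the same job. This is only a sketch; the archive and folder names below are placeholders, so substitute the actual file name you downloaded.

```python
import zipfile
from pathlib import Path

# Placeholder names - replace with the archive you actually downloaded
archive = Path("3DVista_Virtual_Tour.zip")
target = Path("3DVista_Virtual_Tour")

with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)  # unpack every entry into the target folder
    print(f"Extracted {len(zf.namelist())} entries to {target}")
```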

-

To run the installer, double-click on the file named "3DVista_Virtual_Tour_Installer.exe" (for Windows) or "3DVista_Virtual_Tour_Installer.dmg" (for Mac). Follow the instructions on the screen and accept the terms and conditions. The installation process may take a few minutes.

-

Step 3: Activate 3D Vista Virtual Tour Zip

-

After the installation is complete, you need to activate the software with a license key. You can get a license key by purchasing the software or by requesting a free trial.

-

To purchase the software, go to https://www.3dvista.com/en/store and select the version you want. You will be redirected to a secure payment page where you can enter your billing information and complete the transaction. You will receive an email with your license key shortly after.

-

To request a free trial, go to https://www.3dvista.com/en/trial and fill out the form with your name, email address, company name, and phone number. You will receive an email with your license key within 24 hours.

-

To activate the software, open it and click on the "Activate" button. Enter your license key and click on "OK". You will see a confirmation message that your software is activated.

-

-

Step 4: Enjoy 3D Vista Virtual Tour Zip

-

Congratulations! You have successfully downloaded and installed 3D Vista Virtual Tour Zip on your computer. Now you can start creating amazing virtual tours with this software. To learn how to use it, you can check out the tutorials and manuals on the official website or watch some videos on YouTube.

-

We hope this article was helpful for you. If you have any questions or feedback, please feel free to contact us at support@3dvista.com. We would love to hear from you.

-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Cutting Optimization Pro 5.9.9 Key Generator 37.md b/spaces/bioriAsaeru/text-to-voice/Cutting Optimization Pro 5.9.9 Key Generator 37.md deleted file mode 100644 index 87445d10eb6ea33eb54155fe988a2736411bc56a..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Cutting Optimization Pro 5.9.9 Key Generator 37.md +++ /dev/null @@ -1,10 +0,0 @@ -

Cutting Optimization Pro 5.9.9 Key Generator 37


Download Zip → https://urloso.com/2uyRLK



- -Jun 15, 2021 - Hardware/software evaluation and development tools. Appendix A. MC1322x Register Address Map - Provides a single-table memory map diagram. Appendix B. MC1322x Register Address Map - Provides descriptions, general information, and register addresses. -Appendix C. MC1322x Programming Language - Provides a brief overview of the MC1322x language. -Appendix D. MC1322x Sample Programs - Provides sample programs for the MC1322x. -Appendix E. MC1322x Description - Provides complete documentation for the MC1322x. -Appendix F. MC1322x Description - Provides a complete list of documentation and reference material for the MC1322x.
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Filem Rock 2005 Full Movie Free 168.md b/spaces/bioriAsaeru/text-to-voice/Filem Rock 2005 Full Movie Free 168.md deleted file mode 100644 index 1692c6f4ee96718e35976eb6c6d59032f48e1395..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Filem Rock 2005 Full Movie Free 168.md +++ /dev/null @@ -1,6 +0,0 @@ -

Filem Rock 2005 Full Movie Free 168


Download Zip →→→ https://urloso.com/2uyOcg



-
-
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Jimmy Neutron Full VERIFIED Episodes Tagalog Version Of The Holy Rosary.md b/spaces/bioriAsaeru/text-to-voice/Jimmy Neutron Full VERIFIED Episodes Tagalog Version Of The Holy Rosary.md deleted file mode 100644 index 23d5c61497fa1b83dcffb159ab4eff6304432292..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Jimmy Neutron Full VERIFIED Episodes Tagalog Version Of The Holy Rosary.md +++ /dev/null @@ -1,10 +0,0 @@ -
-

This airdate for the Jimmy Neutron episode, "A Flying Jimmy Neutron", is not as complete as the other episodes.. This is a list of programs previously broadcast by TV5. For the currently aired shows of the. . Tags: Jimmy Neutron. Jimmy Neutron full episodes tagalog version of the holy rosary has an iPhone version too: or download it here :. . . . Jimmy Neutron full episodes. . . Jimmy Neutron full episodes. . . . . . . . .

-

jimmy neutron full episodes tagalog version of the holy rosary


Downloadhttps://urloso.com/2uyRgc



-

Star Hooper has a long, pale, high cheekbone, a clean-shaven, somewhat bold face, dark eyes, skin of a nice tone, medium full-bodied hair, and a. TodorokiTodoCaboCaboBloody GorgeousGorgeous GanjiroGanjiro. . Jimmy Neutron: Boy Genius (20062007); Atlantis High (20062010); The PJs: Welcome Home from the Holidays (20122013. He has a anagram for the name of the program: Clamp.. Jimmie (20132014).

-

Minnie Snagglepuss has a long, pale, high cheekbone, a clean-shaven, rather bold face, medium full-bodied hair, medium. in their imagination, but not in real life. Hover, you view all the variations of the word. Matson GreeniCakesJohny's Jimmy's.

-

Most of the time. Jimmy Neutron: Boy Genius (20062007); The PJs: Welcome Home from the Holidays (20122013. This is a list of programs previously broadcast by TV5. Star Hooper has a long, pale, high cheekbone, a clean-shaven, somewhat bold face, dark eyes, skin of a nice tone, medium full-bodied hair, and a medium-brown hair.

-

Nick is the following: a husband, a cat dad, a Libra, a Bowler, a black and white cat, a philosopher, a. jimmy neutron full episodes tagalog version of the holy rosary. The series revolves around the adventures of a boy named Jimmy Neutron. Jimmy Neutron (20062007); The PJs: Welcome Home from the Holidays (20122013.

-

-
-
\ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_model_analysis.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_model_analysis.py deleted file mode 100644 index c01b7af09703c8dad889dee0118d74fcc12ac4b0..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/test_model_analysis.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - - -import unittest -import torch -from torch import nn - -from detectron2.utils.analysis import find_unused_parameters, flop_count_operators, parameter_count -from detectron2.utils.testing import get_model_no_weights - - -class RetinaNetTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-Detection/retinanet_R_50_FPN_1x.yaml") - - def test_flop(self): - # RetinaNet supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800), "test_unused": "abcd"}] - res = flop_count_operators(self.model, inputs) - self.assertEqual(int(res["conv"]), 146) # 146B flops - - def test_param_count(self): - res = parameter_count(self.model) - self.assertEqual(res[""], 37915572) - self.assertEqual(res["backbone"], 31452352) - - -class FasterRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - # Faster R-CNN supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800)}] - res = flop_count_operators(self.model, inputs) - - # This only checks flops for backbone & proposal generator - # Flops for box head is not conv, and depends on #proposals, which is - # almost 0 for random inputs. - self.assertEqual(int(res["conv"]), 117) - - def test_flop_with_output_shape(self): - inputs = [{"image": torch.rand(3, 800, 800), "height": 700, "width": 700}] - res = flop_count_operators(self.model, inputs) - self.assertEqual(int(res["conv"]), 117) - - def test_param_count(self): - res = parameter_count(self.model) - self.assertEqual(res[""], 41699936) - self.assertEqual(res["backbone"], 26799296) - - -class MaskRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - inputs1 = [{"image": torch.rand(3, 800, 800)}] - inputs2 = [{"image": torch.rand(3, 800, 800), "height": 700, "width": 700}] - - for inputs in [inputs1, inputs2]: - res = flop_count_operators(self.model, inputs) - # The mask head could have extra conv flops, so total >= 117 - self.assertGreaterEqual(int(res["conv"]), 117) - - -class UnusedParamTest(unittest.TestCase): - def test_unused(self): - class TestMod(nn.Module): - def __init__(self): - super().__init__() - self.fc1 = nn.Linear(10, 10) - self.t = nn.Linear(10, 10) - - def forward(self, x): - return self.fc1(x).mean() - - m = TestMod() - ret = find_unused_parameters(m, torch.randn(10, 10)) - self.assertEqual(set(ret), {"t.weight", "t.bias"}) diff --git a/spaces/bunkalab/bunka-map/app.py b/spaces/bunkalab/bunka-map/app.py deleted file mode 100644 index 42010df4945027cbf21318e8c950f1624d25bc6b..0000000000000000000000000000000000000000 --- a/spaces/bunkalab/bunka-map/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import streamlit as st - -st.sidebar.image("images/logo.png", use_column_width=True) -st.sidebar.write("Bunka Summarizes & Visualizes Information as Maps using LLMs.") - -st.sidebar.title("Github Page") -st.sidebar.write( - "Have a look at the following package on GitHub: 
https://github.com/charlesdedampierre/BunkaTopics" -) -st.sidebar.title("Dataset") -st.sidebar.write("HH-RLHF Dataset: https://huggingface.co/datasets/Anthropic/hh-rlhf") - -st.title("How to understand large textual datasets?") - -import pandas as pd - -df = pd.read_csv("data/rejection-sampling.csv", index_col=[0]) -st.dataframe(df, use_container_width=True) - -st.title("Bunka Exploration Engine") - -st.image("images/pipeline.png", use_column_width=True) - - -# Path to the HTML file containing the Plotly figure -bunka_map_path = "maps/bunka_map.html" # Replace with your HTML file path - -# Use the 'st.components' function to embed the HTML content -with open(bunka_map_path, "r") as f: - bunka_map_html = f.read() - -st.components.v1.html(bunka_map_html, width=800, height=800) - -st.title("Framing Analysis") - -# Path to the HTML file containing the Plotly figure -bunka_map_path = ( - "maps/bourdieu_priacy_politics.html" # Replace with your HTML file path -) - -# Use the 'st.components' function to embed the HTML content -with open(bunka_map_path, "r") as f: - bunka_map_html = f.read() - -st.components.v1.html(bunka_map_html, width=800, height=800) - -# Path to the HTML file containing the Plotly figure -bunka_map_path = "maps/violence_men_women.html" # Replace with your HTML file path - -# Use the 'st.components' function to embed the HTML content -with open(bunka_map_path, "r") as f: - bunka_map_html = f.read() - -st.components.v1.html(bunka_map_html, width=800, height=800) diff --git a/spaces/cadige/02-Gradio-Art-From-Text-and-Images/app.py b/spaces/cadige/02-Gradio-Art-From-Text-and-Images/app.py deleted file mode 100644 index 10939427025b17176765402185cd11e23caa1523..0000000000000000000000000000000000000000 --- a/spaces/cadige/02-Gradio-Art-From-Text-and-Images/app.py +++ /dev/null @@ -1,224 +0,0 @@ -import os - -os.system("git clone --recursive https://github.com/JD-P/cloob-latent-diffusion") -os.system("cd cloob-latent-diffusion;pip install omegaconf pillow pytorch-lightning einops wandb ftfy regex ./CLIP") - -import argparse -from functools import partial -from pathlib import Path -import sys -sys.path.append('./cloob-latent-diffusion') -sys.path.append('./cloob-latent-diffusion/cloob-training') -sys.path.append('./cloob-latent-diffusion/latent-diffusion') -sys.path.append('./cloob-latent-diffusion/taming-transformers') -sys.path.append('./cloob-latent-diffusion/v-diffusion-pytorch') -from omegaconf import OmegaConf -from PIL import Image -import torch -from torch import nn -from torch.nn import functional as F -from torchvision import transforms -from torchvision.transforms import functional as TF -from tqdm import trange -from CLIP import clip -from cloob_training import model_pt, pretrained -import ldm.models.autoencoder -from diffusion import sampling, utils -import train_latent_diffusion as train -from huggingface_hub import hf_hub_url, cached_download -import random - -# Download the model files -checkpoint = cached_download(hf_hub_url("huggan/distill-ccld-wa", filename="model_student.ckpt")) -ae_model_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.ckpt")) -ae_config_path = cached_download(hf_hub_url("huggan/ccld_wa", filename="ae_model.yaml")) - -# Define a few utility functions - - -def parse_prompt(prompt, default_weight=3.): - if prompt.startswith('http://') or prompt.startswith('https://'): - vals = prompt.rsplit(':', 2) - vals = [vals[0] + ':' + vals[1], *vals[2:]] - else: - vals = prompt.rsplit(':', 1) - vals = vals + ['', 
default_weight][len(vals):] - return vals[0], float(vals[1]) - - -def resize_and_center_crop(image, size): - fac = max(size[0] / image.size[0], size[1] / image.size[1]) - image = image.resize((int(fac * image.size[0]), int(fac * image.size[1])), Image.LANCZOS) - return TF.center_crop(image, size[::-1]) - - -# Load the models -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') -print('Using device:', device) -print('loading models') - -# autoencoder -ae_config = OmegaConf.load(ae_config_path) -ae_model = ldm.models.autoencoder.AutoencoderKL(**ae_config.model.params) -ae_model.eval().requires_grad_(False).to(device) -ae_model.load_state_dict(torch.load(ae_model_path)) -n_ch, side_y, side_x = 4, 32, 32 - -# diffusion model -model = train.DiffusionModel(192, [1,1,2,2], autoencoder_scale=torch.tensor(4.3084)) -model.load_state_dict(torch.load(checkpoint, map_location='cpu')) -model = model.to(device).eval().requires_grad_(False) - -# CLOOB -cloob_config = pretrained.get_config('cloob_laion_400m_vit_b_16_16_epochs') -cloob = model_pt.get_pt_model(cloob_config) -checkpoint = pretrained.download_checkpoint(cloob_config) -cloob.load_state_dict(model_pt.get_pt_params(cloob_config, checkpoint)) -cloob.eval().requires_grad_(False).to(device) - - -# The key function: returns a list of n PIL images -def generate(n=1, prompts=['a red circle'], images=[], seed=42, steps=15, - method='plms', eta=None): - zero_embed = torch.zeros([1, cloob.config['d_embed']], device=device) - target_embeds, weights = [zero_embed], [] - - for prompt in prompts: - txt, weight = parse_prompt(prompt) - target_embeds.append(cloob.text_encoder(cloob.tokenize(txt).to(device)).float()) - weights.append(weight) - - for prompt in images: - path, weight = parse_prompt(prompt) - img = Image.open(utils.fetch(path)).convert('RGB') - clip_size = cloob.config['image_encoder']['image_size'] - img = resize_and_center_crop(img, (clip_size, clip_size)) - batch = TF.to_tensor(img)[None].to(device) - embed = F.normalize(cloob.image_encoder(cloob.normalize(batch)).float(), dim=-1) - target_embeds.append(embed) - weights.append(weight) - - weights = torch.tensor([1 - sum(weights), *weights], device=device) - - torch.manual_seed(seed) - - def cfg_model_fn(x, t): - n = x.shape[0] - n_conds = len(target_embeds) - x_in = x.repeat([n_conds, 1, 1, 1]) - t_in = t.repeat([n_conds]) - clip_embed_in = torch.cat([*target_embeds]).repeat_interleave(n, 0) - vs = model(x_in, t_in, clip_embed_in).view([n_conds, n, *x.shape[1:]]) - v = vs.mul(weights[:, None, None, None, None]).sum(0) - return v - - def run(x, steps): - if method == 'ddpm': - return sampling.sample(cfg_model_fn, x, steps, 1., {}) - if method == 'ddim': - return sampling.sample(cfg_model_fn, x, steps, eta, {}) - if method == 'prk': - return sampling.prk_sample(cfg_model_fn, x, steps, {}) - if method == 'plms': - return sampling.plms_sample(cfg_model_fn, x, steps, {}) - if method == 'pie': - return sampling.pie_sample(cfg_model_fn, x, steps, {}) - if method == 'plms2': - return sampling.plms2_sample(cfg_model_fn, x, steps, {}) - assert False - - batch_size = n - x = torch.randn([n, n_ch, side_y, side_x], device=device) - t = torch.linspace(1, 0, steps + 1, device=device)[:-1] - steps = utils.get_spliced_ddpm_cosine_schedule(t) - pil_ims = [] - for i in trange(0, n, batch_size): - cur_batch_size = min(n - i, batch_size) - out_latents = run(x[i:i+cur_batch_size], steps) - outs = ae_model.decode(out_latents * torch.tensor(2.55).to(device)) - for j, out in enumerate(outs): - 
pil_ims.append(utils.to_pil_image(out)) - - return pil_ims - - -import gradio as gr - -def gen_ims(prompt, im_prompt=None, seed=None, n_steps=10, method='plms'): - if seed == None : - seed = random.randint(0, 10000) - print( prompt, im_prompt, seed, n_steps) - prompts = [prompt] - im_prompts = [] - if im_prompt != None: - im_prompts = [im_prompt] - pil_ims = generate(n=1, prompts=prompts, images=im_prompts, seed=seed, steps=n_steps, method=method) - return pil_ims[0] - -iface = gr.Interface(fn=gen_ims, - inputs=[#gr.inputs.Slider(minimum=1, maximum=1, step=1, default=1,label="Number of images"), - #gr.inputs.Slider(minimum=0, maximum=200, step=1, label='Random seed', default=0), - gr.inputs.Textbox(label="Text prompt"), - gr.inputs.Image(optional=True, label="Image prompt", type='filepath'), - #gr.inputs.Slider(minimum=10, maximum=35, step=1, default=15,label="Number of steps") - ], - outputs=[gr.outputs.Image(type="pil", label="Generated Image")], - examples=[ - ["Futurism, in the style of Wassily Kandinsky"], - ["Art Nouveau, in the style of John Singer Sargent"], - ["Surrealism, in the style of Edgar Degas"], - ["Expressionism, in the style of Wassily Kandinsky"], - ["Futurism, in the style of Egon Schiele"], - ["Neoclassicism, in the style of Gustav Klimt"], - ["Cubism, in the style of Gustav Klimt"], - ["Op Art, in the style of Marc Chagall"], - ["Romanticism, in the style of M.C. Escher"], - ["Futurism, in the style of M.C. Escher"], - ["Abstract Art, in the style of M.C. Escher"], - ["Mannerism, in the style of Paul Klee"], - ["Romanesque Art, in the style of Leonardo da Vinci"], - ["High Renaissance, in the style of Rembrandt"], - ["Magic Realism, in the style of Gustave Dore"], - ["Realism, in the style of Jean-Michel Basquiat"], - ["Art Nouveau, in the style of Paul Gauguin"], - ["Avant-garde, in the style of Pierre-Auguste Renoir"], - ["Baroque, in the style of Edward Hopper"], - ["Post-Impressionism, in the style of Wassily Kandinsky"], - ["Naturalism, in the style of Rene Magritte"], - ["Constructivism, in the style of Paul Cezanne"], - ["Abstract Expressionism, in the style of Henri Matisse"], - ["Pop Art, in the style of Vincent van Gogh"], - ["Futurism, in the style of Wassily Kandinsky"], - ["Futurism, in the style of Zdzislaw Beksinski"], - ['Surrealism, in the style of Salvador Dali'], - ["Aaron Wacker, oil on canvas"], - ["abstract"], - ["landscape"], - ["portrait"], - ["sculpture"], - ["genre painting"], - ["installation"], - ["photo"], - ["figurative"], - ["illustration"], - ["still life"], - ["history painting"], - ["cityscape"], - ["marina"], - ["animal painting"], - ["design"], - ["calligraphy"], - ["symbolic painting"], - ["graffiti"], - ["performance"], - ["mythological painting"], - ["battle painting"], - ["self-portrait"], - ["Impressionism, oil on canvas"] - ], - title='Art Generator and Style Mixer from 🧠 Cloob and 🎨 WikiArt - Visual Art Encyclopedia:', - description="Trained on images from the [WikiArt](https://www.wikiart.org/) dataset, comprised of visual arts", - article = 'Model used is: [model card](https://huggingface.co/huggan/distill-ccld-wa)..' 
- -) -iface.launch(enable_queue=True) # , debug=True for colab debugging \ No newline at end of file diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/models/__init__.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/models/__init__.py deleted file mode 100644 index 4803ba6b2a0afc8022e756ae5b3f4c7403c3c1bd..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/models/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .melgan import * # NOQA -from .parallel_wavegan import * # NOQA diff --git a/spaces/cfj108/CompVis-stable-diffusion-v1-4/app.py b/spaces/cfj108/CompVis-stable-diffusion-v1-4/app.py deleted file mode 100644 index b60a087620a806fea130bedcd6940bef75fa3337..0000000000000000000000000000000000000000 --- a/spaces/cfj108/CompVis-stable-diffusion-v1-4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CompVis/stable-diffusion-v1-4").launch() diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/question-answering/trainer_qa.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/question-answering/trainer_qa.py deleted file mode 100644 index a486405b62877ee83d1a60f3fdf7a8f326882fcc..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/question-answering/trainer_qa.py +++ /dev/null @@ -1,136 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The HuggingFace Team All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -A subclass of `Trainer` specific to Question-Answering tasks -""" -import math -import time - -from transformers import Trainer, is_torch_tpu_available -from transformers.trainer_utils import PredictionOutput, speed_metrics - - -if is_torch_tpu_available(check_device=False): - import torch_xla.core.xla_model as xm - import torch_xla.debug.metrics as met - - -class QuestionAnsweringTrainer(Trainer): - def __init__(self, *args, eval_examples=None, post_process_function=None, **kwargs): - super().__init__(*args, **kwargs) - self.eval_examples = eval_examples - self.post_process_function = post_process_function - - def evaluate(self, eval_dataset=None, eval_examples=None, ignore_keys=None, metric_key_prefix: str = "eval"): - eval_dataset = self.eval_dataset if eval_dataset is None else eval_dataset - eval_dataloader = self.get_eval_dataloader(eval_dataset) - eval_examples = self.eval_examples if eval_examples is None else eval_examples - - # Temporarily disable metric computation, we will do it in the loop here. 
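- # (For QA, the raw outputs of the evaluation loop are start/end logits rather than
- # text answers; they only become scorable after `post_process_function` runs below,
- # so `compute_metrics` is detached here and applied to the post-processed predictions later.)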
- compute_metrics = self.compute_metrics - self.compute_metrics = None - eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop - start_time = time.time() - try: - output = eval_loop( - eval_dataloader, - description="Evaluation", - # No point gathering the predictions if there are no metrics, otherwise we defer to - # self.args.prediction_loss_only - prediction_loss_only=True if compute_metrics is None else None, - ignore_keys=ignore_keys, - metric_key_prefix=metric_key_prefix, - ) - finally: - self.compute_metrics = compute_metrics - total_batch_size = self.args.eval_batch_size * self.args.world_size - if f"{metric_key_prefix}_jit_compilation_time" in output.metrics: - start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"] - output.metrics.update( - speed_metrics( - metric_key_prefix, - start_time, - num_samples=output.num_samples, - num_steps=math.ceil(output.num_samples / total_batch_size), - ) - ) - if self.post_process_function is not None and self.compute_metrics is not None and self.args.should_save: - # Only the main node write the results by default - eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions) - metrics = self.compute_metrics(eval_preds) - - # Prefix all keys with metric_key_prefix + '_' - for key in list(metrics.keys()): - if not key.startswith(f"{metric_key_prefix}_"): - metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key) - metrics.update(output.metrics) - else: - metrics = output.metrics - - if self.args.should_log: - # Only the main node log the results by default - self.log(metrics) - - if self.args.tpu_metrics_debug or self.args.debug: - # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.) - xm.master_print(met.metrics_report()) - - self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, metrics) - return metrics - - def predict(self, predict_dataset, predict_examples, ignore_keys=None, metric_key_prefix: str = "test"): - predict_dataloader = self.get_test_dataloader(predict_dataset) - - # Temporarily disable metric computation, we will do it in the loop here. 
- compute_metrics = self.compute_metrics - self.compute_metrics = None - eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop - start_time = time.time() - try: - output = eval_loop( - predict_dataloader, - description="Prediction", - # No point gathering the predictions if there are no metrics, otherwise we defer to - # self.args.prediction_loss_only - prediction_loss_only=True if compute_metrics is None else None, - ignore_keys=ignore_keys, - metric_key_prefix=metric_key_prefix, - ) - finally: - self.compute_metrics = compute_metrics - total_batch_size = self.args.eval_batch_size * self.args.world_size - if f"{metric_key_prefix}_jit_compilation_time" in output.metrics: - start_time += output.metrics[f"{metric_key_prefix}_jit_compilation_time"] - output.metrics.update( - speed_metrics( - metric_key_prefix, - start_time, - num_samples=output.num_samples, - num_steps=math.ceil(output.num_samples / total_batch_size), - ) - ) - - if self.post_process_function is None or self.compute_metrics is None: - return output - - predictions = self.post_process_function(predict_examples, predict_dataset, output.predictions, "predict") - metrics = self.compute_metrics(predictions) - - # Prefix all keys with metric_key_prefix + '_' - for key in list(metrics.keys()): - if not key.startswith(f"{metric_key_prefix}_"): - metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key) - metrics.update(output.metrics) - return PredictionOutput(predictions=predictions.predictions, label_ids=predictions.label_ids, metrics=metrics) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/xtreme-s/run_xtreme_s.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/xtreme-s/run_xtreme_s.py deleted file mode 100644 index 6c5b4bde892da18b57335ef779568af0728631c6..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/xtreme-s/run_xtreme_s.py +++ /dev/null @@ -1,949 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and - -""" Fine-tuning a 🤗 Transformers pretrained speech model on the XTREME-S benchmark tasks""" - -import json -import logging -import os -import re -import sys -from collections import OrderedDict, defaultdict -from dataclasses import dataclass, field -from typing import Dict, List, Optional, Union - -import datasets -import numpy as np -import torch -from datasets import DatasetDict, load_dataset, load_metric - -import transformers -from transformers import ( - AutoConfig, - AutoFeatureExtractor, - AutoModelForAudioClassification, - AutoModelForCTC, - AutoModelForSpeechSeq2Seq, - AutoProcessor, - AutoTokenizer, - HfArgumentParser, - Seq2SeqTrainer, - Seq2SeqTrainingArguments, - Trainer, - set_seed, -) -from transformers.trainer_utils import get_last_checkpoint, is_main_process -from transformers.utils import check_min_version -from transformers.utils.versions import require_version - - -# Will error if the minimal version of Transformers is not installed. Remove at your own risks. -check_min_version("4.18.0.dev0") - -require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt") - - -logger = logging.getLogger(__name__) - - -def list_field(default=None, metadata=None): - return field(default_factory=lambda: default, metadata=metadata) - - -TASK_TO_TARGET_COLUMN_NAME = { - "fleurs-asr": "transcription", - "fleurs-lang_id": "lang_id", - "mls": "transcription", - "voxpopuli": "transcription", - "covost2": "translation", - "minds14": "intent_class", - "babel": "transcription", -} - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. - """ - - model_name_or_path: str = field( - metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} - ) - tokenizer_name_or_path: Optional[str] = field( - default=None, - metadata={"help": "Path to pretrained tokenizer or tokenizer identifier from huggingface.co/models"}, - ) - cache_dir: Optional[str] = field( - default=None, - metadata={ - "help": "Where do you want to store the pretrained models and datasets downloaded from huggingface.co" - }, - ) - freeze_feature_encoder: bool = field( - default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."} - ) - attention_dropout: float = field( - default=0.0, metadata={"help": "The dropout ratio for the attention probabilities."} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "The dropout ratio for activations inside the fully connected layer."} - ) - feat_proj_dropout: float = field(default=0.0, metadata={"help": "The dropout ratio for the projected features."}) - hidden_dropout: float = field( - default=0.0, - metadata={ - "help": "The dropout probability for all fully connected layers in the embeddings, encoder, and pooler." - }, - ) - final_dropout: float = field( - default=0.0, - metadata={"help": "The dropout probability for the final projection layer."}, - ) - mask_time_prob: float = field( - default=0.05, - metadata={ - "help": ( - "Probability of each feature vector along the time axis to be chosen as the start of the vector" - "span to be masked. Approximately ``mask_time_prob * sequence_length // mask_time_length`` feature" - "vectors will be masked along the time axis." 
- ) - }, - ) - mask_time_length: int = field( - default=10, - metadata={"help": "Length of vector span to mask along the time axis."}, - ) - mask_feature_prob: float = field( - default=0.0, - metadata={ - "help": ( - "Probability of each feature vector along the feature axis to be chosen as the start of the vectorspan" - " to be masked. Approximately ``mask_feature_prob * sequence_length // mask_feature_length`` feature" - " bins will be masked along the time axis." - ) - }, - ) - mask_feature_length: int = field( - default=10, - metadata={"help": "Length of vector span to mask along the feature axis."}, - ) - layerdrop: float = field(default=0.0, metadata={"help": "The LayerDrop probability."}) - ctc_zero_infinity: bool = field( - default=False, - metadata={"help": "Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`."}, - ) - ctc_loss_reduction: Optional[str] = field( - default="mean", metadata={"help": "The way the ctc loss should be reduced. Should be one of 'mean' or 'sum'."} - ) - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. - - Using `HfArgumentParser` we can turn this class - into argparse arguments to be able to specify them on - the command line. - """ - - dataset_name: str = field( - default="google/xtreme_s", - metadata={"help": "The name of the dataset to use (via the datasets library). Defaults to 'google/xtreme_s'"}, - ) - task: str = field( - default=None, - metadata={ - "help": ( - "The task name of the benchmark to use (via the datasets library). Should be on of: " - "'fleurs-asr', 'mls', 'voxpopuli', 'covost2', 'minds14', 'fleurs-lang_id', 'babel'." - ) - }, - ) - language: str = field( - default="all", - metadata={"help": "The language id as defined in the datasets config name or `all` for all languages."}, - ) - language_group: str = field( - default=None, - metadata={ - "help": ( - "The language group to select a subset of languages to train on. " - "This option is only used the 'fleurs-asr' task. Should be one of: " - "'western_european_we', 'eastern_european_ee', 'central_asia_middle_north_african_cmn', " - "'sub_saharan_african_ssa', 'south_asian_sa', 'south_east_asian_sea', 'chinese_japanase_korean_cjk'." - ) - }, - ) - train_split_name: str = field( - default="train", - metadata={ - "help": "The name of the training dataset split to use (via the datasets library). Defaults to 'train'" - }, - ) - eval_split_name: str = field( - default="validation", - metadata={ - "help": ( - "The name of the evaluation dataset split to use (via the datasets library). Defaults to 'validation'" - ) - }, - ) - predict_split_name: str = field( - default="test", - metadata={ - "help": "The name of the prediction dataset split to use (via the datasets library). Defaults to 'test'" - }, - ) - audio_column_name: str = field( - default="audio", - metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"}, - ) - target_column_name: str = field( - default=None, - metadata={ - "help": ( - "The name of the dataset column containing the target data (transcription/translation/label). If None," - " the name will be inferred from the task. Defaults to None." 
- ) - }, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."} - ) - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={"help": "The number of processes to use for the preprocessing."}, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ) - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of validation examples to this " - "value if set." - ) - }, - ) - max_predict_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of prediction examples to this " - "value if set." - ) - }, - ) - chars_to_ignore: Optional[List[str]] = list_field( - default=', ? . ! - ; : " “ % ‘ ” �'.split(" "), - metadata={"help": "A list of characters to remove from the transcripts."}, - ) - max_duration_in_seconds: float = field( - default=30.0, - metadata={ - "help": ( - "Filter audio files that are longer than `max_duration_in_seconds` seconds to" - " 'max_duration_in_seconds`" - ) - }, - ) - min_duration_in_seconds: float = field( - default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"} - ) - preprocessing_only: bool = field( - default=False, - metadata={ - "help": ( - "Whether to only do data preprocessing and skip training. This is especially useful when data" - " preprocessing errors out in distributed training due to timeout. In this case, one should run the" - " preprocessing in a non-distributed setup with `preprocessing_only=True` so that the cached datasets" - " can consequently be loaded in distributed training" - ) - }, - ) - use_auth_token: bool = field( - default=False, - metadata={ - "help": ( - "If :obj:`True`, will use the token generated when running" - ":obj:`huggingface-cli login` as HTTP bearer authorization for remote files." - ) - }, - ) - unk_token: str = field( - default="[UNK]", - metadata={"help": "The unk token for the tokenizer"}, - ) - pad_token: str = field( - default="[PAD]", - metadata={"help": "The padding token for the tokenizer"}, - ) - word_delimiter_token: str = field( - default="|", - metadata={"help": "The word delimiter token for the tokenizer"}, - ) - phoneme_language: Optional[str] = field( - default=None, - metadata={ - "help": ( - "The target language that should be used be" - " passed to the tokenizer for tokenization. Note that" - " this is only relevant if the model classifies the" - " input audio to a sequence of phoneme sequences." - ) - }, - ) - per_lang_metrics: bool = field( - default=True, - metadata={ - "help": ( - "If `True`, compute the test metrics separately for each language, and average the results. " - "If `False` compute the average test metrics in a single pass for all languages at once." 
- ) - }, - ) - - -@dataclass -class SpeechDataCollatorWithPadding: - processor: AutoProcessor - decoder_start_token_id: Optional[int] = None - padding: Union[bool, str] = "longest" - pad_labels: Optional[int] = True - pad_to_multiple_of: Optional[int] = None - pad_to_multiple_of_labels: Optional[int] = None - - def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: - # split inputs and labels since they have to be of different lenghts and need - # different padding methods - input_features = [{"input_values": feature["input_values"]} for feature in features] - - batch = self.processor.pad( - input_features, - padding=self.padding, - pad_to_multiple_of=self.pad_to_multiple_of, - return_tensors="pt", - ) - - if self.pad_labels: - label_features = [{"input_ids": feature["labels"]} for feature in features] - labels_batch = self.processor.pad( - labels=label_features, - padding=self.padding, - pad_to_multiple_of=self.pad_to_multiple_of_labels, - return_tensors="pt", - ) - - # replace padding with -100 to ignore loss correctly - labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) - - # if bos token is appended in previous tokenization step, - # cut bos token here as it's append later anyways - if ( - self.decoder_start_token_id is not None - and (labels[:, 0] == self.decoder_start_token_id).all().cpu().item() - ): - labels = labels[:, 1:] - - batch["labels"] = labels - else: - batch["labels"] = torch.tensor([feature["labels"] for feature in features]) - - return batch - - -def create_vocabulary_from_data( - datasets: DatasetDict, - word_delimiter_token: Optional[str] = None, - unk_token: Optional[str] = None, - pad_token: Optional[str] = None, -): - # Given training and test labels create vocabulary - def extract_all_chars(batch): - all_text = " ".join(batch["target_text"]) - vocab = list(set(all_text)) - return {"vocab": [vocab], "all_text": [all_text]} - - vocabs = datasets.map( - extract_all_chars, - batched=True, - batch_size=-1, - keep_in_memory=True, - remove_columns=datasets["train"].column_names, - ) - - # take union of all unique characters in each dataset - vocab_set = ( - (set(vocabs["train"]["vocab"][0]) if "train" in vocabs else set()) - | (set(vocabs["eval"]["vocab"][0]) if "eval" in vocabs else set()) - | (set(vocabs["predict"]["vocab"][0]) if "predict" in vocabs else set()) - ) - - vocab_dict = {v: k for k, v in enumerate(sorted(vocab_set))} - - # replace white space with delimiter token - if word_delimiter_token is not None: - vocab_dict[word_delimiter_token] = vocab_dict[" "] - del vocab_dict[" "] - - # add unk and pad token - if unk_token is not None: - vocab_dict[unk_token] = len(vocab_dict) - - if pad_token is not None: - vocab_dict[pad_token] = len(vocab_dict) - - return vocab_dict - - -def main(): - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments)) - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - # Detecting last checkpoint. 
- last_checkpoint = None - if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: - last_checkpoint = get_last_checkpoint(training_args.output_dir) - if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty. " - "Use --overwrite_output_dir to overcome." - ) - elif last_checkpoint is not None: - logger.info( - f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " - "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." - ) - - # Setup logging - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - handlers=[logging.StreamHandler(sys.stdout)], - ) - logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) - - # Log on each process the small summary: - logger.warning( - f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" - f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" - ) - # Set the verbosity to info of the Transformers logger (on main process only): - if is_main_process(training_args.local_rank): - transformers.utils.logging.set_verbosity_info() - logger.info("Training/evaluation parameters %s", training_args) - - # Set seed before initializing model. - set_seed(training_args.seed) - - # 1. First, let's load the dataset - raw_datasets = DatasetDict() - task_name = data_args.task - lang_id = data_args.language - - if task_name is None: - raise ValueError( - "Set --task should be set to '' (e.g. 'fleurs-asr', 'mls', 'covost2', 'minds14') " - ) - if lang_id is None: - raise ValueError( - "Set --language should be set to the language id of the sub dataset " - "config to be used (e.g. 'pl', 'en.tr', 'fr-FR') or 'all'" - " for multi-lingual fine-tuning." - ) - if data_args.language_group is not None: - if data_args.task != "fleurs-asr": - raise ValueError("--language_group should only be used with --task=fleurs-asr") - if data_args.language != "all": - raise ValueError("--language_group should only be used with --language=all") - - if data_args.target_column_name is None: - target_column_name = TASK_TO_TARGET_COLUMN_NAME[task_name] - else: - target_column_name = data_args.target_column_name - - # here we differentiate between tasks with text as the target and classification tasks - is_text_target = target_column_name in ("transcription", "translation") - - config_name = ".".join([task_name.split("-")[0], lang_id]) - - if training_args.do_train: - raw_datasets["train"] = load_dataset( - data_args.dataset_name, - config_name, - split=data_args.train_split_name, - use_auth_token=data_args.use_auth_token, - cache_dir=model_args.cache_dir, - ) - - if data_args.audio_column_name not in raw_datasets["train"].column_names: - raise ValueError( - f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'." - " Make sure to set `--audio_column_name` to the correct audio column - one of" - f" {', '.join(raw_datasets['train'].column_names)}." - ) - - if target_column_name not in raw_datasets["train"].column_names: - raise ValueError( - f"--target_column_name {target_column_name} not found in dataset '{data_args.dataset_name}'. 
" - "Make sure to set `--target_column_name` to the correct text column - one of " - f"{', '.join(raw_datasets['train'].column_names)}." - ) - - if data_args.max_train_samples is not None: - raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples)) - - if training_args.do_eval: - raw_datasets["eval"] = load_dataset( - data_args.dataset_name, - config_name, - split=data_args.eval_split_name, - use_auth_token=data_args.use_auth_token, - cache_dir=model_args.cache_dir, - ) - - if data_args.max_eval_samples is not None: - raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples)) - - if training_args.do_predict: - raw_datasets["predict"] = load_dataset( - data_args.dataset_name, - config_name, - split=data_args.predict_split_name, - use_auth_token=data_args.use_auth_token, - cache_dir=model_args.cache_dir, - ) - - if data_args.max_predict_samples is not None: - raw_datasets["predict"] = raw_datasets["predict"].select(range(data_args.max_predict_samples)) - - lang_list = next(iter(raw_datasets.values())).features["lang_id"].names - if not is_text_target: - label_list = next(iter(raw_datasets.values())).features[target_column_name].names - num_labels = len(label_list) - - num_workers = data_args.preprocessing_num_workers - - lang_group = data_args.language_group - if lang_group is not None: - with training_args.main_process_first(desc="language group filter"): - lang_group_id = next(iter(raw_datasets.values())).features["lang_group_id"].str2int(lang_group) - raw_datasets = raw_datasets.filter( - lambda lang_group: lang_group == lang_group_id, - num_proc=num_workers, - input_columns=["lang_group_id"], - ) - - # 2. We remove some special characters from the datasets - # that make training complicated and do not help in transcribing the speech - # E.g. characters, such as `,` and `.` do not really have an acoustic characteristic - # that could be easily picked up by the model - chars_to_ignore_regex = ( - f'[{"".join(data_args.chars_to_ignore)}]' if data_args.chars_to_ignore is not None else None - ) - - def remove_special_characters(batch): - if chars_to_ignore_regex is not None: - batch["target_text"] = re.sub(chars_to_ignore_regex, "", batch[target_column_name]).lower() + " " - else: - batch["target_text"] = batch[target_column_name].lower() + " " - return batch - - if is_text_target: - with training_args.main_process_first(desc="dataset map special characters removal"): - raw_datasets = raw_datasets.map( - remove_special_characters, - remove_columns=[target_column_name], - desc="remove special characters from datasets", - ) - - # save special tokens for tokenizer - word_delimiter_token = data_args.word_delimiter_token - unk_token = data_args.unk_token - pad_token = data_args.pad_token - - # 3. Next, let's load the config as we might need it to create - # the tokenizer - config = AutoConfig.from_pretrained( - model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_auth_token=data_args.use_auth_token - ) - - if is_text_target: - # 4. 
(Optional, for ASR and translation) If no tokenizer file is defined, - # we create the vocabulary of the model by extracting all unique characters from - # the training and evaluation datasets - # We need to make sure that only first rank saves vocabulary - # make sure all processes wait until vocab is created - tokenizer_name_or_path = model_args.tokenizer_name_or_path - tokenizer_kwargs = {} - if tokenizer_name_or_path is None: - # save vocab in training output dir - tokenizer_name_or_path = training_args.output_dir - - vocab_file = os.path.join(tokenizer_name_or_path, "vocab.json") - - with training_args.main_process_first(): - if training_args.overwrite_output_dir and os.path.isfile(vocab_file): - os.remove(vocab_file) - - with training_args.main_process_first(desc="dataset map vocabulary creation"): - if not os.path.isfile(vocab_file): - os.makedirs(tokenizer_name_or_path, exist_ok=True) - vocab_dict = create_vocabulary_from_data( - raw_datasets, - word_delimiter_token=word_delimiter_token, - unk_token=unk_token, - pad_token=pad_token, - ) - - # save vocab dict to be loaded into tokenizer - with open(vocab_file, "w") as file: - json.dump(vocab_dict, file) - - # if tokenizer has just been created - # it is defined by `tokenizer_class` if present in config else by `model_type` - if not config.is_encoder_decoder: - tokenizer_kwargs = { - "config": config if config.tokenizer_class is not None else None, - "tokenizer_type": config.model_type if config.tokenizer_class is None else None, - "unk_token": unk_token, - "pad_token": pad_token, - "word_delimiter_token": word_delimiter_token, - } - else: - tokenizer_kwargs = {} - - # 5. Now we can instantiate the feature extractor, tokenizer and model - # Note for distributed training, the .from_pretrained methods guarantee that only - # one local process can concurrently download model & vocab. 
- - # load feature_extractor and tokenizer - if is_text_target: - tokenizer = AutoTokenizer.from_pretrained( - tokenizer_name_or_path, - use_auth_token=data_args.use_auth_token, - **tokenizer_kwargs, - ) - feature_extractor = AutoFeatureExtractor.from_pretrained( - model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_auth_token=data_args.use_auth_token - ) - - # adapt config - # (speech translation requires pre-configured seq2seq models) - if task_name != "covost2": - config.update( - { - "feat_proj_dropout": model_args.feat_proj_dropout, - "attention_dropout": model_args.attention_dropout, - "hidden_dropout": model_args.hidden_dropout, - "final_dropout": model_args.final_dropout, - "mask_time_prob": model_args.mask_time_prob, - "mask_time_length": model_args.mask_time_length, - "mask_feature_prob": model_args.mask_feature_prob, - "mask_feature_length": model_args.mask_feature_length, - "gradient_checkpointing": training_args.gradient_checkpointing, - "layerdrop": model_args.layerdrop, - "ctc_zero_infinity": model_args.ctc_zero_infinity, - "ctc_loss_reduction": model_args.ctc_loss_reduction, - "activation_dropout": model_args.activation_dropout, - } - ) - if training_args.do_train: - if is_text_target: - config.pad_token_id = tokenizer.pad_token_id - config.vocab_size = len(tokenizer) - else: - label_to_id = {v: i for i, v in enumerate(label_list)} - config.label2id = label_to_id - config.id2label = {id: label for label, id in label_to_id.items()} - config.num_labels = num_labels - - # create model - if target_column_name == "transcription": - model = AutoModelForCTC.from_pretrained( - model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - config=config, - use_auth_token=data_args.use_auth_token, - ) - elif config.is_encoder_decoder: - model = AutoModelForSpeechSeq2Seq.from_pretrained( - model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - config=config, - use_auth_token=data_args.use_auth_token, - ) - if model.config.decoder_start_token_id is None: - raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") - else: - model = AutoModelForAudioClassification.from_pretrained( - model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - config=config, - use_auth_token=data_args.use_auth_token, - ) - - # freeze encoder - if model_args.freeze_feature_encoder: - model.freeze_feature_encoder() - - # 6. 
Now we preprocess the datasets including loading the audio, resampling and normalization - # Thankfully, `datasets` takes care of automatically loading and resampling the audio, - # so that we just need to set the correct target sampling rate and normalize the input - # via the `feature_extractor` - - # make sure that dataset decodes audio with correct sampling rate - dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate - if dataset_sampling_rate != feature_extractor.sampling_rate: - raw_datasets = raw_datasets.cast_column( - data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) - ) - - # derive max & min input length for sample rate & max duration - max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate - min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate - audio_column_name = data_args.audio_column_name - - # `phoneme_language` is only relevant if the model is fine-tuned on phoneme classification - phoneme_language = data_args.phoneme_language - - # Preprocessing the datasets. - # We need to read the audio files as arrays and tokenize the targets. - def prepare_dataset(batch): - # load audio - sample = batch[audio_column_name] - - inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) - batch["input_values"] = inputs.input_values[0] - batch["length"] = len(batch["input_values"]) - - # encode targets - additional_kwargs = {} - if phoneme_language is not None: - additional_kwargs["phonemizer_lang"] = phoneme_language - - if is_text_target: - batch["labels"] = tokenizer(batch["target_text"], **additional_kwargs).input_ids - else: - batch["labels"] = batch[target_column_name] - - batch["lang"] = batch["lang_id"] - - return batch - - with training_args.main_process_first(desc="dataset map preprocessing"): - vectorized_datasets = raw_datasets.map( - prepare_dataset, - remove_columns=next(iter(raw_datasets.values())).column_names, - num_proc=num_workers, - desc="preprocess datasets", - ) - - if training_args.do_train: - - def is_audio_in_length_range(length): - return length > min_input_length and length < max_input_length - - # filter data that is shorter than min_input_length - vectorized_datasets["train"] = vectorized_datasets["train"].filter( - is_audio_in_length_range, - num_proc=num_workers, - input_columns=["length"], - ) - - # 7. Next, we can prepare for the training step. - # Let's use the appropriate XTREME-S evaluation metric, - # instantiate a data collator and the trainer - - # Define evaluation metrics during training, *i.e.* word error rate, character error rate - eval_metric = load_metric("xtreme_s", task_name) - - # for large datasets it is advised to run the preprocessing on a - # single machine first with ``args.preprocessing_only`` since there will mostly likely - # be a timeout when running the script in distributed mode. - # In a second step ``args.preprocessing_only`` can then be set to `False` to load the - # cached dataset - if data_args.preprocessing_only: - logger.info(f"Data preprocessing finished. 
Files cached at {vectorized_datasets.cache_files}") - return - - def asr_logits_argmax(logits, labels): - return logits.argmax(dim=-1) - - def compute_asr_metric(pred): - pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id - - pred_str = tokenizer.batch_decode(pred.predictions) - # we do not want to group tokens when computing the metrics - label_str = tokenizer.batch_decode(pred.label_ids, group_tokens=False) - - metric = eval_metric.compute(predictions=pred_str, references=label_str) - return metric - - def compute_classification_metric(pred): - pred_ids = np.argmax(pred.predictions, axis=1) - metric = eval_metric.compute(predictions=pred_ids, references=pred.label_ids) - return metric - - # Now save everything to be able to create a single processor later - if is_main_process(training_args.local_rank): - # save feature extractor, tokenizer and config - feature_extractor.save_pretrained(training_args.output_dir) - if is_text_target: - tokenizer.save_pretrained(training_args.output_dir) - config.save_pretrained(training_args.output_dir) - # wait until configs are saved in the main process before loading the processor - if training_args.local_rank != -1: - torch.distributed.barrier() - - if is_text_target: - processor = AutoProcessor.from_pretrained(training_args.output_dir) - else: - processor = AutoFeatureExtractor.from_pretrained(training_args.output_dir) - - # Instantiate custom data collator - data_collator = SpeechDataCollatorWithPadding(processor=processor, pad_labels=is_text_target) - - # Initialize Trainer - if target_column_name == "translation": - trainer = Seq2SeqTrainer( - model=model, - data_collator=data_collator, - args=training_args, - preprocess_logits_for_metrics=asr_logits_argmax if training_args.predict_with_generate else None, - compute_metrics=compute_asr_metric if training_args.predict_with_generate else None, - train_dataset=vectorized_datasets["train"] if training_args.do_train else None, - eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, - tokenizer=feature_extractor, - ) - else: - trainer = Trainer( - model=model, - data_collator=data_collator, - args=training_args, - preprocess_logits_for_metrics=asr_logits_argmax if is_text_target else None, - compute_metrics=compute_asr_metric if is_text_target else compute_classification_metric, - train_dataset=vectorized_datasets["train"] if training_args.do_train else None, - eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None, - tokenizer=feature_extractor, - ) - - # 8. 
Finally, we can start training - - # Training - if training_args.do_train: - # use last checkpoint if exist - if last_checkpoint is not None: - checkpoint = last_checkpoint - elif os.path.isdir(model_args.model_name_or_path): - checkpoint = model_args.model_name_or_path - else: - checkpoint = None - - train_result = trainer.train(resume_from_checkpoint=checkpoint) - trainer.save_model() - - metrics = train_result.metrics - max_train_samples = ( - data_args.max_train_samples - if data_args.max_train_samples is not None - else len(vectorized_datasets["train"]) - ) - metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"])) - - trainer.log_metrics("train", metrics) - trainer.save_metrics("train", metrics) - trainer.save_state() - - # Evaluation on the test set - results = {} - if training_args.do_predict: - logger.info(f"*** Evaluating on the `{data_args.predict_split_name}` set ***") - if data_args.per_lang_metrics: - # separate the `test` dataset into language-specific subsets and compute metrics for each of them - metrics = {} - average_metrics = defaultdict(list) - for lang_id in range(len(lang_list)): - lang_name = lang_list[lang_id] - with training_args.main_process_first(desc="per-language dataset filter"): - lang_dataset = vectorized_datasets["predict"].filter( - lambda lang: lang == lang_id, - num_proc=num_workers, - input_columns=["lang"], - ) - lang_metrics = trainer.evaluate(lang_dataset) - redundant_metrics = ["eval_runtime", "eval_samples_per_second", "eval_steps_per_second", "eval_epoch"] - for metric_name, value in lang_metrics.items(): - average_metrics[metric_name].append(value) - if metric_name not in redundant_metrics: - metrics[f"{metric_name}_{lang_name}"] = value - for metric_name, value in average_metrics.items(): - metrics[metric_name] = np.mean(value) - else: - metrics = trainer.evaluate(vectorized_datasets["predict"]) - max_predict_samples = ( - data_args.max_predict_samples - if data_args.max_predict_samples is not None - else len(vectorized_datasets["predict"]) - ) - metrics["predict_samples"] = min(max_predict_samples, len(vectorized_datasets["predict"])) - - # make sure that the `predict` metrics end up in the log history for the model card - trainer.log(OrderedDict(sorted(metrics.items()))) - - trainer.log_metrics("predict", metrics) - trainer.save_metrics("predict", metrics) - - # Write model card and (optionally) push to hub - kwargs = { - "finetuned_from": model_args.model_name_or_path, - "tasks": task_name, - "tags": [task_name, data_args.dataset_name], - "dataset_args": ( - f"Config: {config_name}, Training split: {data_args.train_split_name}, Eval split:" - f" {data_args.eval_split_name}, Predict split: {data_args.predict_split_name}" - ), - "dataset": f"{data_args.dataset_name.upper()} - {config_name.upper()}", - "language": data_args.language, - } - - if training_args.push_to_hub: - trainer.push_to_hub(**kwargs) - else: - trainer.create_model_card(**kwargs) - - return results - - -if __name__ == "__main__": - main() diff --git a/spaces/chendl/compositional_test/transformers/examples/tensorflow/benchmarking/plot_csv_file.py b/spaces/chendl/compositional_test/transformers/examples/tensorflow/benchmarking/plot_csv_file.py deleted file mode 100644 index 9a9ad9c670470e1f3231d90c7fd375566e2fb8ee..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/tensorflow/benchmarking/plot_csv_file.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import csv -from collections import defaultdict -from dataclasses import dataclass, field -from typing import List, Optional - -import matplotlib.pyplot as plt -import numpy as np -from matplotlib.ticker import ScalarFormatter - -from transformers import HfArgumentParser - - -def list_field(default=None, metadata=None): - return field(default_factory=lambda: default, metadata=metadata) - - -@dataclass -class PlotArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. - """ - - csv_file: str = field( - metadata={"help": "The csv file to plot."}, - ) - plot_along_batch: bool = field( - default=False, - metadata={"help": "Whether to plot along batch size or sequence length. Defaults to sequence length."}, - ) - is_time: bool = field( - default=False, - metadata={"help": "Whether the csv file has time results or memory results. Defaults to memory results."}, - ) - no_log_scale: bool = field( - default=False, - metadata={"help": "Disable logarithmic scale when plotting"}, - ) - is_train: bool = field( - default=False, - metadata={ - "help": "Whether the csv file has training results or inference results. Defaults to inference results." - }, - ) - figure_png_file: Optional[str] = field( - default=None, - metadata={"help": "Filename under which the plot will be saved. 
If unused no plot is saved."}, - ) - short_model_names: Optional[List[str]] = list_field( - default=None, metadata={"help": "List of model names that are used instead of the ones in the csv file."} - ) - - -def can_convert_to_int(string): - try: - int(string) - return True - except ValueError: - return False - - -def can_convert_to_float(string): - try: - float(string) - return True - except ValueError: - return False - - -class Plot: - def __init__(self, args): - self.args = args - self.result_dict = defaultdict(lambda: {"bsz": [], "seq_len": [], "result": {}}) - - with open(self.args.csv_file, newline="") as csv_file: - reader = csv.DictReader(csv_file) - for row in reader: - model_name = row["model"] - self.result_dict[model_name]["bsz"].append(int(row["batch_size"])) - self.result_dict[model_name]["seq_len"].append(int(row["sequence_length"])) - if can_convert_to_int(row["result"]): - # value is not None - self.result_dict[model_name]["result"][ - (int(row["batch_size"]), int(row["sequence_length"])) - ] = int(row["result"]) - elif can_convert_to_float(row["result"]): - # value is not None - self.result_dict[model_name]["result"][ - (int(row["batch_size"]), int(row["sequence_length"])) - ] = float(row["result"]) - - def plot(self): - fig, ax = plt.subplots() - title_str = "Time usage" if self.args.is_time else "Memory usage" - title_str = title_str + " for training" if self.args.is_train else title_str + " for inference" - - if not self.args.no_log_scale: - # set logarithm scales - ax.set_xscale("log") - ax.set_yscale("log") - - for axis in [ax.xaxis, ax.yaxis]: - axis.set_major_formatter(ScalarFormatter()) - - for model_name_idx, model_name in enumerate(self.result_dict.keys()): - batch_sizes = sorted(set(self.result_dict[model_name]["bsz"])) - sequence_lengths = sorted(set(self.result_dict[model_name]["seq_len"])) - results = self.result_dict[model_name]["result"] - - (x_axis_array, inner_loop_array) = ( - (batch_sizes, sequence_lengths) if self.args.plot_along_batch else (sequence_lengths, batch_sizes) - ) - - label_model_name = ( - model_name if self.args.short_model_names is None else self.args.short_model_names[model_name_idx] - ) - - for inner_loop_value in inner_loop_array: - if self.args.plot_along_batch: - y_axis_array = np.asarray( - [results[(x, inner_loop_value)] for x in x_axis_array if (x, inner_loop_value) in results], - dtype=int, - ) - else: - y_axis_array = np.asarray( - [results[(inner_loop_value, x)] for x in x_axis_array if (inner_loop_value, x) in results], - dtype=np.float32, - ) - - (x_axis_label, inner_loop_label) = ( - ("batch_size", "len") if self.args.plot_along_batch else ("in #tokens", "bsz") - ) - - x_axis_array = np.asarray(x_axis_array, int)[: len(y_axis_array)] - plt.scatter( - x_axis_array, y_axis_array, label=f"{label_model_name} - {inner_loop_label}: {inner_loop_value}" - ) - plt.plot(x_axis_array, y_axis_array, "--") - - title_str += f" {label_model_name} vs." 
- - title_str = title_str[:-4] - y_axis_label = "Time in s" if self.args.is_time else "Memory in MB" - - # plot - plt.title(title_str) - plt.xlabel(x_axis_label) - plt.ylabel(y_axis_label) - plt.legend() - - if self.args.figure_png_file is not None: - plt.savefig(self.args.figure_png_file) - else: - plt.show() - - -def main(): - parser = HfArgumentParser(PlotArguments) - plot_args = parser.parse_args_into_dataclasses()[0] - plot = Plot(args=plot_args) - plot.plot() - - -if __name__ == "__main__": - main() diff --git a/spaces/chronopt-research/ViTExCo/UI.py b/spaces/chronopt-research/ViTExCo/UI.py deleted file mode 100644 index 033046d4e8709d171221bc145df3422cfeed9e64..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/UI.py +++ /dev/null @@ -1,81 +0,0 @@ -import streamlit as st -from PIL import Image -import torchvision.transforms as transforms -from streamlit_image_comparison import image_comparison -import numpy as np -import torch -import torchvision - -######################################### Utils ######################################## -video_extensions = ["mp4"] -image_extensions = ["png", "jpg"] - - -def check_type(file_name: str): - for image_extension in image_extensions: - if file_name.endswith(image_extension): - return "image" - for video_extension in video_extensions: - if file_name.endswith(video_extension): - return "video" - return None - - -transform = transforms.Compose( - [transforms.Resize((256, 256)), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))] -) - - -###################################### Load model ###################################### -@st.cache_resource -def load_model(): - model = torchvision.models.segmentation.deeplabv3_resnet101(pretrained=True) - model.eval() - return model - - -model = load_model() -########################################## UI ########################################## -st.title("Colorization") - -uploaded_file = st.file_uploader("Upload grayscale image or video", type=image_extensions + video_extensions) -if uploaded_file: - # Image - if check_type(file_name=uploaded_file.name) == "image": - image = np.array(Image.open(uploaded_file), dtype=np.float32) - - input_tensor = torchvision.transforms.functional.normalize( - torch.tensor(image).permute(2, 0, 1), - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225], - ).unsqueeze(0) - process_button = st.button("Process") - if process_button: - with st.spinner("Từ từ coi..."): - prediction = model(input_tensor) - segment = prediction["out"][0].permute(1, 2, 0) - segment = segment.detach().numpy() - - st.image(segment) - st.image(image) - - image_comparison( - img1=image, - img2=np.array(segment), - label1="Grayscale", - label2="Colorized", - make_responsive=True, - show_labels=True, - ) - # Video - else: - # video = open(uploaded_file.name) - st.video("https://youtu.be/dQw4w9WgXcQ") - -hide_menu_style = """ - - """ -st.markdown(hide_menu_style, unsafe_allow_html=True) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Login-9c3cc0eb.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Login-9c3cc0eb.css deleted file mode 100644 index 9901bcac6c93474ed045092f6d91d6e683ba5b32..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Login-9c3cc0eb.css +++ /dev/null @@ -1 +0,0 @@ 
-.wrap.svelte-1ogxbi0{display:flex;flex-direction:column;justify-content:center;align-items:center;margin-top:var(--size-3);background:var(--background-fill-primary);width:var(--size-full)}h2.svelte-1ogxbi0{margin-bottom:var(--size-3);color:var(--body-text-color);font-weight:var(--section-header-text-weight);font-size:var(--text-xl)}.auth.svelte-1ogxbi0{margin-top:var(--size-1);margin-bottom:var(--size-1);color:var(--body-text-color)}.creds.svelte-1ogxbi0{margin-top:var(--size-4);margin-bottom:var(--size-4);color:var(--error-text-color);font-weight:var(--weight-semibold)} diff --git a/spaces/cihyFjudo/fairness-paper-search/HACK MiniTool Partition Wizard Server V8.1.1 Retail Incl Keygen-BRD [Review and Tutorial].md b/spaces/cihyFjudo/fairness-paper-search/HACK MiniTool Partition Wizard Server V8.1.1 Retail Incl Keygen-BRD [Review and Tutorial].md deleted file mode 100644 index 1de1789bd496337919d1aec64b659839a06e886e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/HACK MiniTool Partition Wizard Server V8.1.1 Retail Incl Keygen-BRD [Review and Tutorial].md +++ /dev/null @@ -1,6 +0,0 @@ -


diff --git a/spaces/cihyFjudo/fairness-paper-search/Thunderbolt 3 is coming The future of connectivity and productivity.md b/spaces/cihyFjudo/fairness-paper-search/Thunderbolt 3 is coming The future of connectivity and productivity.md deleted file mode 100644 index 421402d73113cb8769c645ad858bd6952fb06da8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Thunderbolt 3 is coming The future of connectivity and productivity.md +++ /dev/null @@ -1,7 +0,0 @@ -
OWC has announced that its upcoming Thunderbolt Hub will be compatible with all Apple M1 and Intel Macs equipped with Thunderbolt 3 ports and running macOS Big Sur, offering users the ability to expand the number of available Thunderbolt ports.

-

VESA has announced today that its DisplayPort 2.0 specs are coming to USB4/USB-C, which will bring a jump in the capabilities of video output. The standard will support up to 16K displays with video data throughput of up to 80 Gbps.

-

                                                                                                            \r\n\t\t
                                                                                                          • For testing storage to determine if it will be possible to sustain frame rates for chosen format(s).\r\n\t\r\n\t\r\n\t
                                                                                                          • AJA Control Room v16.1:\r\n\t
                                                                                                              \r\n\t\t
                                                                                                            • For high quality capture, monitoring, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                            • AJA\u00a0NMOS\u00a0v16.1:\u00a0\r\n\t
                                                                                                                \r\n\t\t
                                                                                                              • An optional component providing discovery, registration and control for KONA IP running in\u00a0SMPTE\u00a0ST 2110 environments.\r\n\t\r\n\t\r\n\r\n","product":0,"category":"sa","platform_mac":0,"platform_pc":1,"platform_linux":0,"platform":"m","showdate":"y","released":"7\/28\/2021","url":"","ziparchive":true,"files":["file":"AJA-Software-Installer_Windows_v16.1_Release.zip","abspath":"\/var\/www\/sierra\/releases\/20230118233302\/webroot\/public\/assets\/support\/files\/8319\/en\/AJA-Software-Installer_Windows_v16.1_Release.zip","urlpath":"https:\/\/www.aja.com\/assets\/support\/files\/8319\/en\/","ext":"zip"],"id":8307,"title":"AJA Software Installer v16.0.3 - Windows","description":"AJA Software Installer v16.0.3 for Windows:\r\n\r\nThis unified software, driver and firmware package contains everything you need in order to start using your AJA video I\/O hardware and includes new features including as well as\u00a0maintenance updates.\u00a0Please read the Release Notes\u00a0for complete detail.\u00a0\r\n\r\nThe following AJA applications are installed in v16.0.3\r\n\r\n
                                                                                                                  \r\n\t
                                                                                                                • AJA Control Panel v16.0.3:\u00a0\r\n\r\n\t
                                                                                                                    \r\n\t\t
                                                                                                                  • For setup and control of your AJA KONA, Io or T-TAP product, including firmware updates.\r\n\t\t
                                                                                                                  • For high quality capture, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                                  • AJA System Test v16.0.3:\u00a0\r\n\t
                                                                                                                      \r\n\t\t
                                                                                                                    • For testing storage to determine if it will be possible to sustain frame rates for chosen format(s).\r\n\t\r\n\t\r\n\t
                                                                                                                    • AJA Control Room v16.0.3:\r\n\t
                                                                                                                        \r\n\t\t
                                                                                                                      • For high quality capture, monitoring, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                                      • AJA\u00a0NMOS\u00a0v16.0.3:\u00a0\r\n\t
                                                                                                                          \r\n\t\t
                                                                                                                        • An optional component providing discovery, registration and control for KONA IP running in\u00a0SMPTE\u00a0ST 2110 environments.\r\n\t\r\n\t\r\n\r\n","product":0,"category":"sa","platform_mac":0,"platform_pc":1,"platform_linux":0,"platform":"m","showdate":"y","released":"7\/20\/2021","url":"","ziparchive":true,"files":["file":"AJA-Software-Installer_Windows_v16.0.3_Release.zip","abspath":"\/var\/www\/sierra\/releases\/20230118233302\/webroot\/public\/assets\/support\/files\/8307\/en\/AJA-Software-Installer_Windows_v16.0.3_Release.zip","urlpath":"https:\/\/www.aja.com\/assets\/support\/files\/8307\/en\/","ext":"zip"],"id":8167,"title":"AJA Software Installer v16.0.2 - Windows","description":"AJA Software Installer v16.0.2 for Windows:\r\n\r\nThis unified software, driver and firmware package contains everything you need in order to start using your AJA video I\/O hardware and includes new features including as well as\u00a0maintenance updates.\u00a0Please read the Release Notes\u00a0for complete detail.\u00a0\r\n\r\nThe following AJA applications are installed in v16.0.2\r\n\r\n
                                                                                                                            \r\n\t
                                                                                                                          • AJA Control Panel v16.0.2:\u00a0\r\n\r\n\t
                                                                                                                              \r\n\t\t
                                                                                                                            • For setup and control of your AJA KONA, Io or T-TAP product, including firmware updates.\r\n\t\t
                                                                                                                            • For high quality capture, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                                            • AJA System Test v16.0.2:\u00a0\r\n\t
                                                                                                                                \r\n\t\t
                                                                                                                              • For testing storage to determine if it will be possible to sustain frame rates for chosen format(s).\r\n\t\r\n\t\r\n\t
                                                                                                                              • AJA Control Room v16.0.2:\r\n\t
                                                                                                                                  \r\n\t\t
                                                                                                                                • For high quality capture, monitoring, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                                                • AJA\u00a0NMOS\u00a0v16.0.2:\u00a0\r\n\t
                                                                                                                                    \r\n\t\t
                                                                                                                                  • An optional component providing discovery, registration and control for KONA IP running in\u00a0SMPTE\u00a0ST 2110 environments.\r\n\t\r\n\t\r\n\r\n","product":0,"category":"sa","platform_mac":0,"platform_pc":1,"platform_linux":0,"platform":"m","showdate":"y","released":"4\/20\/2021","url":"","ziparchive":true,"files":["file":"AJA-Software-Installer_Windows_v16.0.2_Release.zip","abspath":"\/var\/www\/sierra\/releases\/20230118233302\/webroot\/public\/assets\/support\/files\/8167\/en\/AJA-Software-Installer_Windows_v16.0.2_Release.zip","urlpath":"https:\/\/www.aja.com\/assets\/support\/files\/8167\/en\/","ext":"zip"],"id":7944,"title":"AJA Software Installer v16 - Windows","description":"AJA Software Installer v16 for Windows:\r\n\r\nThis unified software, driver and firmware package contains everything you need in order to start using your AJA video I\/O hardware and includes new features as well as\u00a0maintenance updates. New features for a range of I\/O products\u00a0include\u00a0HDR over SDI, HDR Auto Playback Detection, 4K Closed Caption Support, NMO3 v1.3 Support, LLDP Support,\u00a08K NLE\/VFX Software Support,\u00a08K Capture and Playback and Dynamic FPGA Firmware Reconfiguration.\u00a0Please read the Release Notes\u00a0for complete detail.\u00a0\r\n\r\nThe following AJA applications are installed in v16\r\n\r\n
                                                                                                                                      \r\n\t
                                                                                                                                    • AJA Control Panel v16:\u00a0\r\n\r\n\t
                                                                                                                                        \r\n\t\t
                                                                                                                                      • For setup and control of your AJA KONA, Io or T-TAP product, including firmware updates.\r\n\t\t
                                                                                                                                      • For high quality capture, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                                                      • AJA System Test v16:\u00a0\r\n\t
                                                                                                                                          \r\n\t\t
                                                                                                                                        • For testing storage to determine if it will be possible to sustain frame rates for chosen format(s).\r\n\t\r\n\t\r\n\t
                                                                                                                                        • AJA Control Room v16:\r\n\t
                                                                                                                                            \r\n\t\t
                                                                                                                                          • For high quality capture, monitoring, playback and output.\r\n\t\r\n\t\r\n\t
                                                                                                                                          • AJA\u00a0NMOS\u00a0v16:\u00a0\r\n\t
                                                                                                                                              \r\n\t\t
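The support feed behind this listing exposes each release as a flat record with a title, a release date, per-platform flags, and a list of downloadable files. Purely as an illustration (the abbreviated records and the `latest_for` helper below are stand-ins of mine, not an AJA tool), a few lines of Python can pick the newest installer for a given platform from records shaped like that:

```python
from datetime import datetime

# Abbreviated stand-ins mirroring the fields seen in the support feed:
# title, released as M/D/YYYY, per-platform flags, and a file list.
releases = [
    {"title": "AJA Software Installer v16.2.2 - Windows", "released": "6/14/2022",
     "platform_pc": 1, "platform_mac": 0,
     "files": [{"file": "AJA-Software-Installer_Windows_v16.2.2_Release.zip"}]},
    {"title": "AJA Software Installer v16.0 - macOS", "released": "3/2/2021",
     "platform_pc": 0, "platform_mac": 1,
     "files": [{"file": "AJA-Software-Installer_macOS_v16.0_Release.zip"}]},
]

def latest_for(records, platform_flag="platform_pc"):
    """Return the most recent release whose platform flag is set, or None."""
    candidates = [r for r in records if r.get(platform_flag)]
    if not candidates:
        return None
    return max(candidates, key=lambda r: datetime.strptime(r["released"], "%m/%d/%Y"))

newest = latest_for(releases, "platform_pc")
if newest:
    print(newest["title"], "->", newest["files"][0]["file"])
```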
### Manuals

- AJA T-TAP Pro Manual v16.2 (AJA_Manual_T-TAP-Pro_v16.2.pdf, released 2/3/2022; macOS, Windows).
- AJA Control Room Manual v16.2 (AJA_Manual_ControlRoom_v16.2.pdf, released 2/3/2022; macOS, Windows, Linux).
### Documents

- AJA OBS Quick Start Guide v16.2r1 (AJA_QSG_OBS_v16.2r1.pdf, released 2/7/2023; macOS, Windows, Linux).
- AJA Thunderbolt Devices Quick Start Guide v16.2 (AJA_QSG_Thunderbolt_v16.2.pdf, released 6/1/2022; macOS, Windows, Linux).
- AJA Adobe Apps Quick Start Guide v16.2.2 (AJA_QSG_Adobe_Apps_v16.2.2.pdf, released 5/25/2022; macOS, Windows).
- AJA Apple Apps Quick Start Guide v16.2 (AJA_QSG_Apple_Apps_v16.2.pdf, released 2/3/2022; macOS).
- AJA Telestream Apps Quick Start Guide v16.2 (AJA_QSG_Telestream_Apps_v16.2.pdf, released 2/3/2022; macOS, Windows).
- AJA vMix Quick Start Guide v16.1 (AJA_QSG_vMix_v16.1.pdf, released 7/28/2021; Windows). AJA I/O devices support a broad range of creative software, including vMix, a complete 4K/UltraHD or multi-channel live video production application. This guide gives general procedures for setting up AJA I/O devices with vMix, including selecting live video inputs, adding audio sources, and sending your produced video out from vMix.
- AJA macOS Big Sur Quick Start Guide v1.0 (AJA_BigSur_QSG_v1.0.pdf, released 3/2/2021; macOS). Apple's macOS releases have increased their internal security requirements and settings, which affects the procedures used for updating AJA device software on Mac computers. For Big Sur specifically, you will need to confirm AJA's developer information by unlocking and relocking your System Preferences settings at the proper time; read this guide for best practices.
- T-TAP Pro Video Output Formats ("T-TAP Pro Video Output Formats.pdf", released 3/2/2021; macOS, Windows).

### FAQ

**What does T-TAP Pro do?**
T-TAP Pro outputs the highest quality video from your Thunderbolt 3 equipped computer to a consumer display, projector or professional reference-grade monitor, via a variety of software applications. See https://www.aja.com/compatibility/io.

**Why was T-TAP Pro created?**
T-TAP Pro is the evolution of AJA's proven T-TAP, featuring 12G-SDI and HDMI v2.0 output for working with up to 4K or UltraHD 60p video over a single cable. This allows easy output of deep color, high frame rate, SDR and HDR video to a wide range of pro and consumer monitors and devices, all at the highest 10- and 12-bit quality. T-TAP Pro is a compact, silent device that brings next-generation mobile monitoring to any supported Thunderbolt 3 enabled system.

**Who is T-TAP Pro designed for?**
Remote and facility-based editors, colorists, audio mixers, DITs, visual effects artists, VJs, live streamers, gamers and developers using macOS or Windows. See https://www.aja.com/compatibility/io.

**How do I know if T-TAP Pro is the right product for me?**
You require silent, high quality video and audio monitoring for software running on a Thunderbolt 3 equipped Mac or PC, you work with SD, HD or 4K up to 60p, and you need the flexibility to output HDR via SDI and/or HDMI (simultaneously) with support across Dolby Vision, HDR10 and HLG.

**Does T-TAP Pro support 4K/UltraHD?**
Yes. T-TAP Pro can output 4K/UltraHD via one 12G-SDI connection and one HDMI 2.0 port simultaneously: YCbCr up to 4:2:2, 10-bit, 60p and RGB up to 4:4:4, 12-bit, 30p. See https://www.aja.com/products/t-tap-pro#techspecs.
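Those 4K figures are what drive the single-cable 12G-SDI requirement, and they are also the sort of sustained rates AJA System Test checks storage against. A rough back-of-the-envelope sketch (plain arithmetic only, not an AJA tool; the function name and the active-pixels-only simplification are assumptions of mine):

```python
def video_data_rate_gbps(width, height, fps, bits_per_sample, chroma="4:2:2"):
    """Approximate uncompressed video payload rate in Gbit/s (active pixels only)."""
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[chroma]
    bits_per_frame = width * height * samples_per_pixel * bits_per_sample
    return bits_per_frame * fps / 1e9

# UltraHD 10-bit 4:2:2 at 60p works out to roughly 10 Gbit/s of active video,
# which is why a 12G-SDI link (or HDMI 2.0) is needed for a single-cable feed.
rate = video_data_rate_gbps(3840, 2160, 60, 10, "4:2:2")
print(f"~{rate:.1f} Gbit/s (~{rate / 8 * 1000:.0f} MB/s sustained to storage)")
```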
**Does T-TAP Pro support HDR?**
Yes. T-TAP Pro supports HDR via SDI and/or HDMI (simultaneously) across Dolby Vision, HDR10 and HLG; whatever type of HDR your software application supports can be delivered to the screen with T-TAP Pro. See https://www.aja.com/products/t-tap-pro#techspecs.

**Does T-TAP Pro support ANC?**
Yes. ANC is supported, including timecode and Closed Captioning (CC).

**Does T-TAP Pro come with any software?**
T-TAP Pro is designed to work with AJA's unified desktop software package, which includes AJA Control Panel, AJA Control Room, AJA System Test, and AJA Multi-Channel Config for setting up Telestream Wirecast. AJA Control Room is used for playing back high quality files (ProRes, DNxHD/HR) and supports HDR10 and HLG.

**Does T-TAP Pro support 12G-SDI?**
Yes. T-TAP Pro supports 12G, 6G, 3G and 1.5G SDI signals for high quality monitoring, including HDR.

**Does T-TAP Pro support HDMI 2.0?**
Yes. T-TAP Pro provides a full-size HDMI 2.0 output for high quality monitoring, including HDR.

**What additional connectivity does T-TAP Pro have?**
A dual-channel/stereo analog audio output on the front, for headphone monitoring.

**What are T-TAP Pro's audio monitoring options?**
SDI: 16-channel. HDMI: 8-channel. Analog (headphone): dual-channel/stereo. HDMI monitoring can be switched between audio channels 1-8 or 9-16, and dual-channel monitoring can be switched between any pair of channels from 1-2 through 15-16. The easy-to-reach front audio volume control defaults to headphone control only; through AJA Control Panel it can also be assigned to the overall baseband mix output, which helps when you use Control Panel to mix audio from a professional audio/video application with the desktop/OS audio output.

**Is T-TAP Pro compatible with both macOS and Windows?**
Yes. T-TAP Pro can be used with macOS beginning with Big Sur (11.2), with continued support for Windows 10 and its updates.

**How do I navigate macOS security dialogs?**
Always check the release notes, where you will find advice for different macOS versions. For Big Sur installations, see https://www.aja.com/support/item/7960.

**Is T-TAP Pro compatible with Mac and PC laptops, minis and workstations?**
Yes. Connect T-TAP Pro to the host system via a Thunderbolt 3 port on a laptop, mini or workstation, or connect it to a workstation via a Thunderbolt 3 PCIe card. For examples, see https://www.aja.com/products/thunderbolt-laptop and https://www.aja.com/products/thunderbolt-desktop.

**Is T-TAP Pro compatible with industry-standard NLEs such as Adobe Premiere Pro, Apple FCP and Avid Media Composer?**
Yes. T-TAP Pro has out-of-the-box support for Adobe applications, Apple FCP, Avid Media Composer and much more; visit https://www.aja.com/compatibility/io.

**Is T-TAP Pro upgradeable over time?**
Yes. T-TAP Pro gains new features and enhancements when you download and install the latest version of AJA's unified desktop software package.

**Can T-TAP Pro be used on a Thunderbolt 2 host system?**
Yes, although a Thunderbolt 2 host restricts what is possible, since T-TAP Pro is designed to use the full bandwidth of Thunderbolt 3.

**How is power provided to T-TAP Pro?**
T-TAP Pro uses external power, which is why it can also run on a Thunderbolt 2 host through a quality Thunderbolt 2 to Thunderbolt 3 adapter (with the bandwidth restriction noted above).

**After installing AJA Desktop Software on a Mac running High Sierra (or above), AJA Control Panel reports "Unsupported AJA Device." How do I fix that?**

*macOS Mojave:* with Mojave (v10.14), Apple requests that third-party kernel extensions be notarized. AJA's device driver kernel extension has been notarized starting with AJA Software v15.2, and Apple has indicated that future macOS versions may prevent unnotarized drivers from loading; in that case AJA software prior to v15.2 will leave AJA devices failing to operate (Unsupported AJA Device).

*macOS High Sierra:* with High Sierra (v10.13), Apple requires that third-party application developers be identified during kernel extension installation. Failure to do so will make AJA devices fail to operate (Unsupported AJA Device).

Depending on your macOS version and AJA Desktop Software installation history, the following outcomes are possible:

- macOS Sierra and earlier supported versions: no problems with AJA software installation or updates.
- Earlier macOS updated to High Sierra: no problems if the AJA Desktop Software package was already installed before the update; AJA's identification as a trusted developer is grandfathered in.
- First AJA Desktop Software install on High Sierra: no problems if you follow the instructions shown during installation (Figure 1). Do NOT click OK; instead click Open Security Preferences (or go to System Preferences > Security and Privacy) and click Allow for AJA Video Systems (Figure 2). Note: if you later remove AJA applications with the AJA Uninstaller, the developer identification is retained by macOS High Sierra, so reinstallation should proceed without problems.

*Recovery from installation approval failure:* if you clicked OK during installation and skipped the developer approval step, the AJA Desktop Software installation completes, but macOS Gatekeeper prevents AJA's device driver from loading and AJA devices will not operate (Unsupported AJA Device). Apple allows a window in which you can belatedly approve the developer: within 30 minutes of installation, System Preferences > Security and Privacy still shows the developer message and Allow button.
After 30 minutes, however, the message and button are removed. Recovery then involves uninstalling all AJA files (some manually), reinstalling the AJA Desktop Software package, and clicking Allow for the AJA Video Systems developer:

1. Run the AJA Uninstaller, located in the AJA Utilities folder inside the Mac Applications folder.
2. Reveal the hidden user Library: in the Finder menu bar click Go, then hold down the Option key; the Library folder appears as long as the Option key is held down.
3. Go to Library > Preferences and delete all com.aja.*.* files; there may be one file or several (a small helper for this step is sketched below).
4. Remove AJA Control Panel from the Dock, if applicable.
5. Restart the Mac.
6. Install the AJA Desktop Software package.
7. During installation, click Open Security Preferences (or go to System Preferences > Security and Privacy).
8. Under the General tab, click the Allow button for AJA Video Systems; the button is only available for 30 minutes.
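Step 3 can be tedious to do by hand. The sketch below is an assumption of mine rather than part of AJA's instructions: it simply lists com.aja.* preference files in the standard per-user Library path and only deletes them when explicitly asked.

```python
from pathlib import Path

def clear_aja_prefs(delete=False):
    """List com.aja.* preference files in the user Library (step 3 above).

    Dry run by default: prints what it finds and only unlinks when delete=True.
    """
    prefs = Path.home() / "Library" / "Preferences"
    matches = sorted(prefs.glob("com.aja.*"))
    for f in matches:
        print(("removing " if delete else "found ") + str(f))
        if delete:
            f.unlink()
    return matches

if __name__ == "__main__":
    clear_aja_prefs(delete=False)  # inspect first; rerun with delete=True to remove
```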
Visit our software compatibility pages to find out more, https:\/\/www.aja.com\/compatibility\/io.\r\n","product":0,"category":"fa","platform_mac":1,"platform_pc":1,"platform_linux":1,"platform":"m","showdate":"y","released":"3\/8\/2021","url":"","ziparchive":false,"files":[],"id":8002,"title":"Is T-TAP Pro upgradeable over time?","description":"Yes. T-TAP Pro can be updated with new features and enhancements by downloading and installing the latest version of AJA\u2019s unified desktop software package.\r\n\r\n\u00a0\r\n","product":0,"category":"fa","platform_mac":1,"platform_pc":1,"platform_linux":1,"platform":"m","showdate":"y","released":"3\/8\/2021","url":"","ziparchive":false,"files":[],"id":7999,"title":"Can T-TAP Pro be used on a Thunderbolt\u2122 2 host system?","description":"Yes. Although, it should be noted that using a Thunderbolt\u2122 2 host will restrict what is possible, since T-TAP Pro is designed to use the full bandwidth of Thunderbolt 3.\r\n\r\n\u00a0\r\n","product":0,"category":"fa","platform_mac":1,"platform_pc":1,"platform_linux":1,"platform":"m","showdate":"y","released":"3\/8\/2021","url":"","ziparchive":false,"files":[],"id":7996,"title":"How is power provided to T-TAP Pro?","description":"T-TAP Pro uses external power. This means T-TAP Pro can be used on a Thunderbolt\u2122 2 host system by use of a quality Thunderbolt 2 to Thunderbolt 3 adapter. It should be noted, using a Thunderbolt 2 host will restrict what is possible, since T-TAP Pro is designed to use the full bandwidth of Thunderbolt 3.\r\n\r\n\u00a0\r\n","product":0,"category":"fa","platform_mac":1,"platform_pc":1,"platform_linux":1,"platform":"m","showdate":"y","released":"3\/8\/2021","url":"","ziparchive":false,"files":[]],"mac_count":20,"pc_count":20,"linux_count":19,"load_count":59,"show_links":false,"m_load":1,"p_load":1,"l_load":1,"m_cached":true,"p_cached":true,"l_cached":true],"title":"T-TAP® Pro","platform":"m","category":"","tag":"t-tap-pro","dlcount":188,"keywords":"","lang":"en","num":0,"term":"","dynamic":true,"itemsperload":15,"threshold":20}; Contact Support 180 Litton Drive, Grass Valley, CA 95945 USA
Phone: +1-530-271-3190, Fax: +1-530-271-3140
Email: support@aja.com

aaccfb2cb3
-
-
                                                                                                                                              \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/cffLib/width.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/cffLib/width.py deleted file mode 100644 index c0a746b6922d4c66d0559078457c9546c77c65d3..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/cffLib/width.py +++ /dev/null @@ -1,209 +0,0 @@ -# -*- coding: utf-8 -*- - -"""T2CharString glyph width optimizer. - -CFF glyphs whose width equals the CFF Private dictionary's ``defaultWidthX`` -value do not need to specify their width in their charstring, saving bytes. -This module determines the optimum ``defaultWidthX`` and ``nominalWidthX`` -values for a font, when provided with a list of glyph widths.""" - -from fontTools.ttLib import TTFont -from collections import defaultdict -from operator import add -from functools import reduce - - -class missingdict(dict): - def __init__(self, missing_func): - self.missing_func = missing_func - - def __missing__(self, v): - return self.missing_func(v) - - -def cumSum(f, op=add, start=0, decreasing=False): - - keys = sorted(f.keys()) - minx, maxx = keys[0], keys[-1] - - total = reduce(op, f.values(), start) - - if decreasing: - missing = lambda x: start if x > maxx else total - domain = range(maxx, minx - 1, -1) - else: - missing = lambda x: start if x < minx else total - domain = range(minx, maxx + 1) - - out = missingdict(missing) - - v = start - for x in domain: - v = op(v, f[x]) - out[x] = v - - return out - - -def byteCost(widths, default, nominal): - - if not hasattr(widths, "items"): - d = defaultdict(int) - for w in widths: - d[w] += 1 - widths = d - - cost = 0 - for w, freq in widths.items(): - if w == default: - continue - diff = abs(w - nominal) - if diff <= 107: - cost += freq - elif diff <= 1131: - cost += freq * 2 - else: - cost += freq * 5 - return cost - - -def optimizeWidthsBruteforce(widths): - """Bruteforce version. Veeeeeeeeeeeeeeeeery slow. Only works for smallests of fonts.""" - - d = defaultdict(int) - for w in widths: - d[w] += 1 - - # Maximum number of bytes using default can possibly save - maxDefaultAdvantage = 5 * max(d.values()) - - minw, maxw = min(widths), max(widths) - domain = list(range(minw, maxw + 1)) - - bestCostWithoutDefault = min(byteCost(widths, None, nominal) for nominal in domain) - - bestCost = len(widths) * 5 + 1 - for nominal in domain: - if byteCost(widths, None, nominal) > bestCost + maxDefaultAdvantage: - continue - for default in domain: - cost = byteCost(widths, default, nominal) - if cost < bestCost: - bestCost = cost - bestDefault = default - bestNominal = nominal - - return bestDefault, bestNominal - - -def optimizeWidths(widths): - """Given a list of glyph widths, or dictionary mapping glyph width to number of - glyphs having that, returns a tuple of best CFF default and nominal glyph widths. - - This algorithm is linear in UPEM+numGlyphs.""" - - if not hasattr(widths, "items"): - d = defaultdict(int) - for w in widths: - d[w] += 1 - widths = d - - keys = sorted(widths.keys()) - minw, maxw = keys[0], keys[-1] - domain = list(range(minw, maxw + 1)) - - # Cumulative sum/max forward/backward. 
- cumFrqU = cumSum(widths, op=add) - cumMaxU = cumSum(widths, op=max) - cumFrqD = cumSum(widths, op=add, decreasing=True) - cumMaxD = cumSum(widths, op=max, decreasing=True) - - # Cost per nominal choice, without default consideration. - nomnCostU = missingdict( - lambda x: cumFrqU[x] + cumFrqU[x - 108] + cumFrqU[x - 1132] * 3 - ) - nomnCostD = missingdict( - lambda x: cumFrqD[x] + cumFrqD[x + 108] + cumFrqD[x + 1132] * 3 - ) - nomnCost = missingdict(lambda x: nomnCostU[x] + nomnCostD[x] - widths[x]) - - # Cost-saving per nominal choice, by best default choice. - dfltCostU = missingdict( - lambda x: max(cumMaxU[x], cumMaxU[x - 108] * 2, cumMaxU[x - 1132] * 5) - ) - dfltCostD = missingdict( - lambda x: max(cumMaxD[x], cumMaxD[x + 108] * 2, cumMaxD[x + 1132] * 5) - ) - dfltCost = missingdict(lambda x: max(dfltCostU[x], dfltCostD[x])) - - # Combined cost per nominal choice. - bestCost = missingdict(lambda x: nomnCost[x] - dfltCost[x]) - - # Best nominal. - nominal = min(domain, key=lambda x: bestCost[x]) - - # Work back the best default. - bestC = bestCost[nominal] - dfltC = nomnCost[nominal] - bestCost[nominal] - ends = [] - if dfltC == dfltCostU[nominal]: - starts = [nominal, nominal - 108, nominal - 1132] - for start in starts: - while cumMaxU[start] and cumMaxU[start] == cumMaxU[start - 1]: - start -= 1 - ends.append(start) - else: - starts = [nominal, nominal + 108, nominal + 1132] - for start in starts: - while cumMaxD[start] and cumMaxD[start] == cumMaxD[start + 1]: - start += 1 - ends.append(start) - default = min(ends, key=lambda default: byteCost(widths, default, nominal)) - - return default, nominal - - -def main(args=None): - """Calculate optimum defaultWidthX/nominalWidthX values""" - - import argparse - - parser = argparse.ArgumentParser( - "fonttools cffLib.width", - description=main.__doc__, - ) - parser.add_argument( - "inputs", metavar="FILE", type=str, nargs="+", help="Input TTF files" - ) - parser.add_argument( - "-b", - "--brute-force", - dest="brute", - action="store_true", - help="Use brute-force approach (VERY slow)", - ) - - args = parser.parse_args(args) - - for fontfile in args.inputs: - font = TTFont(fontfile) - hmtx = font["hmtx"] - widths = [m[0] for m in hmtx.metrics.values()] - if args.brute: - default, nominal = optimizeWidthsBruteforce(widths) - else: - default, nominal = optimizeWidths(widths) - print( - "glyphs=%d default=%d nominal=%d byteCost=%d" - % (len(widths), default, nominal, byteCost(widths, default, nominal)) - ) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) == 1: - import doctest - - sys.exit(doctest.testmod().failed) - main() diff --git a/spaces/cmudrc/kaboom/app.py b/spaces/cmudrc/kaboom/app.py deleted file mode 100644 index 5b5d55de115c919fad3741dc38280f2840690673..0000000000000000000000000000000000000000 --- a/spaces/cmudrc/kaboom/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio -import kaboom -from kaboom.carMakers import runCarDesignProblem - -def run_kaboom(reps, steps, comms): - - #create a parameters object - #parameters (p.nAgents = 33, p.nTeams = 11, p.nDims = 56) are automatically set for this problem. 
- parameters = kaboom.params.Params() - parameters.reps = reps - parameters.pComm = comms - parameters.steps = steps - - #run the simulation with the car designer objective - team = runCarDesignProblem(parameters) - - #check the performance of the team - #invert score *-1 so that higher score = better performance - return team.getBestScore()*-1 - -gradio.Interface( - fn = run_kaboom, - inputs = [ - gradio.Number(label="Number of teams", value=2), - gradio.Number(label="Steps", value=300), - gradio.Number(label="Probability of Communication", value=0.2), - ], outputs = [gradio.Number(label="Performance")] -).launch(debug=True) \ No newline at end of file diff --git a/spaces/cncn102/bingo1/src/components/settings.tsx b/spaces/cncn102/bingo1/src/components/settings.tsx deleted file mode 100644 index 80b8a2d3b252b875f5b6f7dfc2f6e3ad9cdfb22a..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/settings.tsx +++ /dev/null @@ -1,157 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, encodeHeadersToCookie, getCookie, setCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [imageOnly, setImageOnly] = useState(getCookie('IMAGE_ONLY') !== '0') - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
- 图文示例: - 如何获取 BING_HEADER - -
- -
- setCurlValue(e.target.value)} - /> -
- 身份信息仅用于画图(推荐) - setImageOnly(checked)} - > - - -
- - - - - - - -
- ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
- 启用语音回答 - setEnableTTS(checked)} - > - - -
- - - - -
- -
                                                                                                                                              - ) - } - return null -} diff --git a/spaces/codelion/Grounding_DINO_demo/groundingdino/util/slio.py b/spaces/codelion/Grounding_DINO_demo/groundingdino/util/slio.py deleted file mode 100644 index 72c1f0f7b82cdc931d381feef64fe15815ba657e..0000000000000000000000000000000000000000 --- a/spaces/codelion/Grounding_DINO_demo/groundingdino/util/slio.py +++ /dev/null @@ -1,177 +0,0 @@ -# ========================================================== -# Modified from mmcv -# ========================================================== - -import json -import pickle -from abc import ABCMeta, abstractmethod -from pathlib import Path - -import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - - -# =========================== -# Rigister handler -# =========================== - - -class BaseFileHandler(metaclass=ABCMeta): - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode="r", **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode="w", **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) - - -class JsonHandler(BaseFileHandler): - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - return json.dumps(obj, **kwargs) - - -class PickleHandler(BaseFileHandler): - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path(filepath, mode="rb", **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault("protocol", 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault("protocol", 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path(obj, filepath, mode="wb", **kwargs) - - -class YamlHandler(BaseFileHandler): - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault("Loader", Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault("Dumper", Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault("Dumper", Dumper) - return yaml.dump(obj, **kwargs) - - -file_handlers = { - "json": JsonHandler(), - "yaml": YamlHandler(), - "yml": YamlHandler(), - "pickle": PickleHandler(), - "pkl": PickleHandler(), -} - -# =========================== -# load and dump -# =========================== - - -def is_str(x): - """Whether the input is an string instance. - - Note: This method is deprecated since python 2 is no longer supported. - """ - return isinstance(x, str) - - -def slload(file, file_format=None, **kwargs): - """Load data from json/yaml/pickle files. - - This method provides a unified api for loading data from serialized files. - - Args: - file (str or :obj:`Path` or file-like object): Filename or a file-like - object. 
- file_format (str, optional): If not specified, the file format will be - inferred from the file extension, otherwise use the specified one. - Currently supported formats include "json", "yaml/yml" and - "pickle/pkl". - - Returns: - The content from the file. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None and is_str(file): - file_format = file.split(".")[-1] - if file_format not in file_handlers: - raise TypeError(f"Unsupported format: {file_format}") - - handler = file_handlers[file_format] - if is_str(file): - obj = handler.load_from_path(file, **kwargs) - elif hasattr(file, "read"): - obj = handler.load_from_fileobj(file, **kwargs) - else: - raise TypeError('"file" must be a filepath str or a file-object') - return obj - - -def sldump(obj, file=None, file_format=None, **kwargs): - """Dump data to json/yaml/pickle strings or files. - - This method provides a unified api for dumping data as strings or to files, - and also supports custom arguments for each file format. - - Args: - obj (any): The python object to be dumped. - file (str or :obj:`Path` or file-like object, optional): If not - specified, then the object is dump to a str, otherwise to a file - specified by the filename or file-like object. - file_format (str, optional): Same as :func:`load`. - - Returns: - bool: True for success, False otherwise. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if is_str(file): - file_format = file.split(".")[-1] - elif file is None: - raise ValueError("file_format must be specified since file is None") - if file_format not in file_handlers: - raise TypeError(f"Unsupported format: {file_format}") - - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif is_str(file): - handler.dump_to_path(obj, file, **kwargs) - elif hasattr(file, "write"): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_ltp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_ltp.c deleted file mode 100644 index f7fb85bbf8d47fccc79f6dba8df7e4c0520964e6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_ltp.c +++ /dev/null @@ -1,236 +0,0 @@ -/* - * AAC encoder long term prediction extension - * Copyright (C) 2015 Rostislav Pehlivanov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * AAC encoder long term prediction extension - * @author Rostislav Pehlivanov ( atomnuker gmail com ) - */ - -#include "aacenc_ltp.h" -#include "aacenc_quantization.h" -#include "aacenc_utils.h" - -/** - * Encode LTP data. 
- */ -void ff_aac_encode_ltp_info(AACEncContext *s, SingleChannelElement *sce, - int common_window) -{ - int i; - IndividualChannelStream *ics = &sce->ics; - if (s->profile != FF_PROFILE_AAC_LTP || !ics->predictor_present) - return; - if (common_window) - put_bits(&s->pb, 1, 0); - put_bits(&s->pb, 1, ics->ltp.present); - if (!ics->ltp.present) - return; - put_bits(&s->pb, 11, ics->ltp.lag); - put_bits(&s->pb, 3, ics->ltp.coef_idx); - for (i = 0; i < FFMIN(ics->max_sfb, MAX_LTP_LONG_SFB); i++) - put_bits(&s->pb, 1, ics->ltp.used[i]); -} - -void ff_aac_ltp_insert_new_frame(AACEncContext *s) -{ - int i, ch, tag, chans, cur_channel, start_ch = 0; - ChannelElement *cpe; - SingleChannelElement *sce; - for (i = 0; i < s->chan_map[0]; i++) { - cpe = &s->cpe[i]; - tag = s->chan_map[i+1]; - chans = tag == TYPE_CPE ? 2 : 1; - for (ch = 0; ch < chans; ch++) { - sce = &cpe->ch[ch]; - cur_channel = start_ch + ch; - /* New sample + overlap */ - memcpy(&sce->ltp_state[0], &sce->ltp_state[1024], 1024*sizeof(sce->ltp_state[0])); - memcpy(&sce->ltp_state[1024], &s->planar_samples[cur_channel][2048], 1024*sizeof(sce->ltp_state[0])); - memcpy(&sce->ltp_state[2048], &sce->ret_buf[0], 1024*sizeof(sce->ltp_state[0])); - sce->ics.ltp.lag = 0; - } - start_ch += chans; - } -} - -static void get_lag(float *buf, const float *new, LongTermPrediction *ltp) -{ - int i, j, lag = 0, max_corr = 0; - float max_ratio = 0.0f; - for (i = 0; i < 2048; i++) { - float corr, s0 = 0.0f, s1 = 0.0f; - const int start = FFMAX(0, i - 1024); - for (j = start; j < 2048; j++) { - const int idx = j - i + 1024; - s0 += new[j]*buf[idx]; - s1 += buf[idx]*buf[idx]; - } - corr = s1 > 0.0f ? s0/sqrt(s1) : 0.0f; - if (corr > max_corr) { - max_corr = corr; - lag = i; - max_ratio = corr/(2048-start); - } - } - ltp->lag = FFMAX(av_clip_uintp2(lag, 11), 0); - ltp->coef_idx = quant_array_idx(max_ratio, ltp_coef, 8); - ltp->coef = ltp_coef[ltp->coef_idx]; -} - -static void generate_samples(float *buf, LongTermPrediction *ltp) -{ - int i, samples_num = 2048; - if (!ltp->lag) { - ltp->present = 0; - return; - } else if (ltp->lag < 1024) { - samples_num = ltp->lag + 1024; - } - for (i = 0; i < samples_num; i++) - buf[i] = ltp->coef*buf[i + 2048 - ltp->lag]; - memset(&buf[i], 0, (2048 - i)*sizeof(float)); -} - -/** - * Process LTP parameters - * @see Patent WO2006070265A1 - */ -void ff_aac_update_ltp(AACEncContext *s, SingleChannelElement *sce) -{ - float *pred_signal = &sce->ltp_state[0]; - const float *samples = &s->planar_samples[s->cur_channel][1024]; - - if (s->profile != FF_PROFILE_AAC_LTP) - return; - - /* Calculate lag */ - get_lag(pred_signal, samples, &sce->ics.ltp); - generate_samples(pred_signal, &sce->ics.ltp); -} - -void ff_aac_adjust_common_ltp(AACEncContext *s, ChannelElement *cpe) -{ - int sfb, count = 0; - SingleChannelElement *sce0 = &cpe->ch[0]; - SingleChannelElement *sce1 = &cpe->ch[1]; - - if (!cpe->common_window || - sce0->ics.window_sequence[0] == EIGHT_SHORT_SEQUENCE || - sce1->ics.window_sequence[0] == EIGHT_SHORT_SEQUENCE) { - sce0->ics.ltp.present = 0; - return; - } - - for (sfb = 0; sfb < FFMIN(sce0->ics.max_sfb, MAX_LTP_LONG_SFB); sfb++) { - int sum = sce0->ics.ltp.used[sfb] + sce1->ics.ltp.used[sfb]; - if (sum != 2) { - sce0->ics.ltp.used[sfb] = 0; - } else { - count++; - } - } - - sce0->ics.ltp.present = !!count; - sce0->ics.predictor_present = !!count; -} - -/** - * Mark LTP sfb's - */ -void ff_aac_search_for_ltp(AACEncContext *s, SingleChannelElement *sce, - int common_window) -{ - int w, g, w2, i, start = 0, count = 0; - 
int saved_bits = -(15 + FFMIN(sce->ics.max_sfb, MAX_LTP_LONG_SFB)); - float *C34 = &s->scoefs[128*0], *PCD = &s->scoefs[128*1]; - float *PCD34 = &s->scoefs[128*2]; - const int max_ltp = FFMIN(sce->ics.max_sfb, MAX_LTP_LONG_SFB); - - if (sce->ics.window_sequence[0] == EIGHT_SHORT_SEQUENCE) { - if (sce->ics.ltp.lag) { - memset(&sce->ltp_state[0], 0, 3072*sizeof(sce->ltp_state[0])); - memset(&sce->ics.ltp, 0, sizeof(LongTermPrediction)); - } - return; - } - - if (!sce->ics.ltp.lag || s->lambda > 120.0f) - return; - - for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) { - start = 0; - for (g = 0; g < sce->ics.num_swb; g++) { - int bits1 = 0, bits2 = 0; - float dist1 = 0.0f, dist2 = 0.0f; - if (w*16+g > max_ltp) { - start += sce->ics.swb_sizes[g]; - continue; - } - for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) { - int bits_tmp1, bits_tmp2; - FFPsyBand *band = &s->psy.ch[s->cur_channel].psy_bands[(w+w2)*16+g]; - for (i = 0; i < sce->ics.swb_sizes[g]; i++) - PCD[i] = sce->coeffs[start+(w+w2)*128+i] - sce->lcoeffs[start+(w+w2)*128+i]; - s->abs_pow34(C34, &sce->coeffs[start+(w+w2)*128], sce->ics.swb_sizes[g]); - s->abs_pow34(PCD34, PCD, sce->ics.swb_sizes[g]); - dist1 += quantize_band_cost(s, &sce->coeffs[start+(w+w2)*128], C34, sce->ics.swb_sizes[g], - sce->sf_idx[(w+w2)*16+g], sce->band_type[(w+w2)*16+g], - s->lambda/band->threshold, INFINITY, &bits_tmp1, NULL); - dist2 += quantize_band_cost(s, PCD, PCD34, sce->ics.swb_sizes[g], - sce->sf_idx[(w+w2)*16+g], - sce->band_type[(w+w2)*16+g], - s->lambda/band->threshold, INFINITY, &bits_tmp2, NULL); - bits1 += bits_tmp1; - bits2 += bits_tmp2; - } - if (dist2 < dist1 && bits2 < bits1) { - for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) - for (i = 0; i < sce->ics.swb_sizes[g]; i++) - sce->coeffs[start+(w+w2)*128+i] -= sce->lcoeffs[start+(w+w2)*128+i]; - sce->ics.ltp.used[w*16+g] = 1; - saved_bits += bits1 - bits2; - count++; - } - start += sce->ics.swb_sizes[g]; - } - } - - sce->ics.ltp.present = !!count && (saved_bits >= 0); - sce->ics.predictor_present = !!sce->ics.ltp.present; - - /* Reset any marked sfbs */ - if (!sce->ics.ltp.present && !!count) { - for (w = 0; w < sce->ics.num_windows; w += sce->ics.group_len[w]) { - start = 0; - for (g = 0; g < sce->ics.num_swb; g++) { - if (sce->ics.ltp.used[w*16+g]) { - for (w2 = 0; w2 < sce->ics.group_len[w]; w2++) { - for (i = 0; i < sce->ics.swb_sizes[g]; i++) { - sce->coeffs[start+(w+w2)*128+i] += sce->lcoeffs[start+(w+w2)*128+i]; - } - } - } - start += sce->ics.swb_sizes[g]; - } - } - } -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_sei_syntax_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_sei_syntax_template.c deleted file mode 100644 index 6a7cc36ddaf4bc1647cf897cb510216be4a92a6e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/cbs_sei_syntax_template.c +++ /dev/null @@ -1,339 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -static int FUNC(filler_payload) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawFillerPayload *current, SEIMessageState *state) -{ - int err, i; - - HEADER("Filler Payload"); - -#ifdef READ - current->payload_size = state->payload_size; -#endif - - for (i = 0; i < current->payload_size; i++) - fixed(8, ff_byte, 0xff); - - return 0; -} - -static int FUNC(user_data_registered) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawUserDataRegistered *current, SEIMessageState *state) -{ - int err, i, j; - - HEADER("User Data Registered ITU-T T.35"); - - u(8, itu_t_t35_country_code, 0x00, 0xff); - if (current->itu_t_t35_country_code != 0xff) - i = 1; - else { - u(8, itu_t_t35_country_code_extension_byte, 0x00, 0xff); - i = 2; - } - -#ifdef READ - if (state->payload_size < i) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "Invalid SEI user data registered payload.\n"); - return AVERROR_INVALIDDATA; - } - current->data_length = state->payload_size - i; -#endif - - allocate(current->data, current->data_length); - for (j = 0; j < current->data_length; j++) - xu(8, itu_t_t35_payload_byte[], current->data[j], 0x00, 0xff, 1, i + j); - - return 0; -} - -static int FUNC(user_data_unregistered) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawUserDataUnregistered *current, SEIMessageState *state) -{ - int err, i; - - HEADER("User Data Unregistered"); - -#ifdef READ - if (state->payload_size < 16) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "Invalid SEI user data unregistered payload.\n"); - return AVERROR_INVALIDDATA; - } - current->data_length = state->payload_size - 16; -#endif - - for (i = 0; i < 16; i++) - us(8, uuid_iso_iec_11578[i], 0x00, 0xff, 1, i); - - allocate(current->data, current->data_length); - - for (i = 0; i < current->data_length; i++) - xu(8, user_data_payload_byte[i], current->data[i], 0x00, 0xff, 1, i); - - return 0; -} - -static int FUNC(mastering_display_colour_volume) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawMasteringDisplayColourVolume *current, SEIMessageState *state) -{ - int err, c; - - HEADER("Mastering Display Colour Volume"); - - for (c = 0; c < 3; c++) { - ubs(16, display_primaries_x[c], 1, c); - ubs(16, display_primaries_y[c], 1, c); - } - - ub(16, white_point_x); - ub(16, white_point_y); - - ub(32, max_display_mastering_luminance); - ub(32, min_display_mastering_luminance); - - return 0; -} - -static int FUNC(content_light_level_info) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawContentLightLevelInfo *current, SEIMessageState *state) -{ - int err; - - HEADER("Content Light Level Information"); - - ub(16, max_content_light_level); - ub(16, max_pic_average_light_level); - - return 0; -} - -static int FUNC(alternative_transfer_characteristics) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawAlternativeTransferCharacteristics *current, - SEIMessageState *state) -{ - int err; - - HEADER("Alternative Transfer Characteristics"); - - ub(8, preferred_transfer_characteristics); - - return 0; -} - -static int FUNC(ambient_viewing_environment) - (CodedBitstreamContext *ctx, RWContext *rw, - SEIRawAmbientViewingEnvironment *current, - SEIMessageState *state) -{ - static const uint16_t max_ambient_light_value = 50000; - int err; - - HEADER("Ambient Viewing Environment"); - - u(32, ambient_illuminance, 1, MAX_UINT_BITS(32)); - u(16, 
ambient_light_x, 0, max_ambient_light_value); - u(16, ambient_light_y, 0, max_ambient_light_value); - - return 0; -} - -static int FUNC(message)(CodedBitstreamContext *ctx, RWContext *rw, - SEIRawMessage *current) -{ - const SEIMessageTypeDescriptor *desc; - int err, i; - - desc = ff_cbs_sei_find_type(ctx, current->payload_type); - if (desc) { - SEIMessageState state = { - .payload_type = current->payload_type, - .payload_size = current->payload_size, - .extension_present = current->extension_bit_length > 0, - }; - int start_position, current_position, bits_written; - -#ifdef READ - CHECK(ff_cbs_sei_alloc_message_payload(current, desc)); -#endif - - start_position = bit_position(rw); - - CHECK(desc->READWRITE(ctx, rw, current->payload, &state)); - - current_position = bit_position(rw); - bits_written = current_position - start_position; - - if (byte_alignment(rw) || state.extension_present || - bits_written < 8 * current->payload_size) { - size_t bits_left; - -#ifdef READ - GetBitContext tmp = *rw; - int trailing_bits, trailing_zero_bits; - - bits_left = 8 * current->payload_size - bits_written; - if (bits_left > 8) - skip_bits_long(&tmp, bits_left - 8); - trailing_bits = get_bits(&tmp, FFMIN(bits_left, 8)); - if (trailing_bits == 0) { - // The trailing bits must contain a bit_equal_to_one, so - // they can't all be zero. - return AVERROR_INVALIDDATA; - } - trailing_zero_bits = ff_ctz(trailing_bits); - current->extension_bit_length = - bits_left - 1 - trailing_zero_bits; -#endif - - if (current->extension_bit_length > 0) { - allocate(current->extension_data, - (current->extension_bit_length + 7) / 8); - - bits_left = current->extension_bit_length; - for (i = 0; bits_left > 0; i++) { - int length = FFMIN(bits_left, 8); - xu(length, reserved_payload_extension_data, - current->extension_data[i], - 0, MAX_UINT_BITS(length), 0); - bits_left -= length; - } - } - - fixed(1, bit_equal_to_one, 1); - while (byte_alignment(rw)) - fixed(1, bit_equal_to_zero, 0); - } - -#ifdef WRITE - current->payload_size = (put_bits_count(rw) - start_position) / 8; -#endif - } else { - uint8_t *data; - - allocate(current->payload, current->payload_size); - data = current->payload; - - for (i = 0; i < current->payload_size; i++) - xu(8, payload_byte[i], data[i], 0, 255, 1, i); - } - - return 0; -} - -static int FUNC(message_list)(CodedBitstreamContext *ctx, RWContext *rw, - SEIRawMessageList *current, int prefix) -{ - SEIRawMessage *message; - int err, k; - -#ifdef READ - for (k = 0;; k++) { - uint32_t payload_type = 0; - uint32_t payload_size = 0; - uint32_t tmp; - GetBitContext payload_gbc; - - while (show_bits(rw, 8) == 0xff) { - fixed(8, ff_byte, 0xff); - payload_type += 255; - } - xu(8, last_payload_type_byte, tmp, 0, 254, 0); - payload_type += tmp; - - while (show_bits(rw, 8) == 0xff) { - fixed(8, ff_byte, 0xff); - payload_size += 255; - } - xu(8, last_payload_size_byte, tmp, 0, 254, 0); - payload_size += tmp; - - // There must be space remaining for both the payload and - // the trailing bits on the SEI NAL unit. 
- if (payload_size + 1 > get_bits_left(rw) / 8) { - av_log(ctx->log_ctx, AV_LOG_ERROR, - "Invalid SEI message: payload_size too large " - "(%"PRIu32" bytes).\n", payload_size); - return AVERROR_INVALIDDATA; - } - CHECK(init_get_bits(&payload_gbc, rw->buffer, - get_bits_count(rw) + 8 * payload_size)); - skip_bits_long(&payload_gbc, get_bits_count(rw)); - - CHECK(ff_cbs_sei_list_add(current)); - message = ¤t->messages[k]; - - message->payload_type = payload_type; - message->payload_size = payload_size; - - CHECK(FUNC(message)(ctx, &payload_gbc, message)); - - skip_bits_long(rw, 8 * payload_size); - - if (!cbs_h2645_read_more_rbsp_data(rw)) - break; - } -#else - for (k = 0; k < current->nb_messages; k++) { - PutBitContext start_state; - uint32_t tmp; - int trace, i; - - message = ¤t->messages[k]; - - // We write the payload twice in order to find the size. Trace - // output is switched off for the first write. - trace = ctx->trace_enable; - ctx->trace_enable = 0; - - start_state = *rw; - for (i = 0; i < 2; i++) { - *rw = start_state; - - tmp = message->payload_type; - while (tmp >= 255) { - fixed(8, ff_byte, 0xff); - tmp -= 255; - } - xu(8, last_payload_type_byte, tmp, 0, 254, 0); - - tmp = message->payload_size; - while (tmp >= 255) { - fixed(8, ff_byte, 0xff); - tmp -= 255; - } - xu(8, last_payload_size_byte, tmp, 0, 254, 0); - - err = FUNC(message)(ctx, rw, message); - ctx->trace_enable = trace; - if (err < 0) - return err; - } - } -#endif - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_sample_rate_tab.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_sample_rate_tab.c deleted file mode 100644 index 16ee04b1d23b0b59c76cecf0e4028d354d411ecd..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_sample_rate_tab.c +++ /dev/null @@ -1,25 +0,0 @@ -/* - * DCA sample rates - * Copyright (C) 2004 Gildas Bazin - * Copyright (C) 2004 Benjamin Zores - * Copyright (C) 2006 Benjamin Larsson - * Copyright (C) 2007 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "dca_sample_rate_tab.h" diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h261dec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h261dec.c deleted file mode 100644 index 849629396438a599e8c8db11a08774a59e88e765..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h261dec.c +++ /dev/null @@ -1,701 +0,0 @@ -/* - * H.261 decoder - * Copyright (c) 2002-2004 Michael Niedermayer - * Copyright (c) 2004 Maarten Daniels - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.261 decoder. - */ - -#include "libavutil/avassert.h" -#include "libavutil/thread.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "mpeg_er.h" -#include "mpegutils.h" -#include "mpegvideo.h" -#include "mpegvideodec.h" -#include "h261.h" - -#define H261_MBA_VLC_BITS 8 -#define H261_MTYPE_VLC_BITS 6 -#define H261_MV_VLC_BITS 7 -#define H261_CBP_VLC_BITS 9 -#define TCOEFF_VLC_BITS 9 -#define MBA_STUFFING 33 -#define MBA_STARTCODE 34 - -static VLC h261_mba_vlc; -static VLC h261_mtype_vlc; -static VLC h261_mv_vlc; -static VLC h261_cbp_vlc; - -typedef struct H261DecContext { - MpegEncContext s; - - H261Context common; - - int current_mba; - int mba_diff; - int current_mv_x; - int current_mv_y; - int gob_number; - int gob_start_code_skipped; // 1 if gob start code is already read before gob header is read -} H261DecContext; - -static av_cold void h261_decode_init_static(void) -{ - INIT_VLC_STATIC(&h261_mba_vlc, H261_MBA_VLC_BITS, 35, - ff_h261_mba_bits, 1, 1, - ff_h261_mba_code, 1, 1, 540); - INIT_VLC_STATIC(&h261_mtype_vlc, H261_MTYPE_VLC_BITS, 10, - ff_h261_mtype_bits, 1, 1, - ff_h261_mtype_code, 1, 1, 80); - INIT_VLC_STATIC(&h261_mv_vlc, H261_MV_VLC_BITS, 17, - &ff_h261_mv_tab[0][1], 2, 1, - &ff_h261_mv_tab[0][0], 2, 1, 144); - INIT_VLC_STATIC(&h261_cbp_vlc, H261_CBP_VLC_BITS, 63, - &ff_h261_cbp_tab[0][1], 2, 1, - &ff_h261_cbp_tab[0][0], 2, 1, 512); - INIT_FIRST_VLC_RL(ff_h261_rl_tcoeff, 552); -} - -static av_cold int h261_decode_init(AVCodecContext *avctx) -{ - static AVOnce init_static_once = AV_ONCE_INIT; - H261DecContext *const h = avctx->priv_data; - MpegEncContext *const s = &h->s; - - s->private_ctx = &h->common; - // set defaults - ff_mpv_decode_init(s, avctx); - - s->out_format = FMT_H261; - s->low_delay = 1; - avctx->pix_fmt = AV_PIX_FMT_YUV420P; - - h->gob_start_code_skipped = 0; - ff_mpv_idct_init(s); - - ff_thread_once(&init_static_once, h261_decode_init_static); - - return 0; -} - -static inline void h261_init_dest(MpegEncContext *s) -{ - const unsigned block_size = 8 >> s->avctx->lowres; - ff_init_block_index(s); - s->dest[0] += 2 * block_size; - s->dest[1] += block_size; - s->dest[2] += block_size; -} - -/** - * Decode the group of blocks header or slice header. 
- * @return <0 if an error occurred - */ -static int h261_decode_gob_header(H261DecContext *h) -{ - unsigned int val; - MpegEncContext *const s = &h->s; - - if (!h->gob_start_code_skipped) { - /* Check for GOB Start Code */ - val = show_bits(&s->gb, 15); - if (val) - return -1; - - /* We have a GBSC */ - skip_bits(&s->gb, 16); - } - - h->gob_start_code_skipped = 0; - - h->gob_number = get_bits(&s->gb, 4); /* GN */ - s->qscale = get_bits(&s->gb, 5); /* GQUANT */ - - /* Check if gob_number is valid */ - if (s->mb_height == 18) { // CIF - if ((h->gob_number <= 0) || (h->gob_number > 12)) - return -1; - } else { // QCIF - if ((h->gob_number != 1) && (h->gob_number != 3) && - (h->gob_number != 5)) - return -1; - } - - /* GEI */ - if (skip_1stop_8data_bits(&s->gb) < 0) - return AVERROR_INVALIDDATA; - - if (s->qscale == 0) { - av_log(s->avctx, AV_LOG_ERROR, "qscale has forbidden 0 value\n"); - if (s->avctx->err_recognition & (AV_EF_BITSTREAM | AV_EF_COMPLIANT)) - return -1; - } - - /* For the first transmitted macroblock in a GOB, MBA is the absolute - * address. For subsequent macroblocks, MBA is the difference between - * the absolute addresses of the macroblock and the last transmitted - * macroblock. */ - h->current_mba = 0; - h->mba_diff = 0; - - return 0; -} - -/** - * Decode the group of blocks / video packet header. - * @return <0 if no resync found - */ -static int h261_resync(H261DecContext *h) -{ - MpegEncContext *const s = &h->s; - int left, ret; - - if (h->gob_start_code_skipped) { - ret = h261_decode_gob_header(h); - if (ret >= 0) - return 0; - } else { - if (show_bits(&s->gb, 15) == 0) { - ret = h261_decode_gob_header(h); - if (ret >= 0) - return 0; - } - // OK, it is not where it is supposed to be ... - s->gb = s->last_resync_gb; - align_get_bits(&s->gb); - left = get_bits_left(&s->gb); - - for (; left > 15 + 1 + 4 + 5; left -= 8) { - if (show_bits(&s->gb, 15) == 0) { - GetBitContext bak = s->gb; - - ret = h261_decode_gob_header(h); - if (ret >= 0) - return 0; - - s->gb = bak; - } - skip_bits(&s->gb, 8); - } - } - - return -1; -} - -/** - * Decode skipped macroblocks. 
- * @return 0 - */ -static int h261_decode_mb_skipped(H261DecContext *h, int mba1, int mba2) -{ - MpegEncContext *const s = &h->s; - int i; - - s->mb_intra = 0; - - for (i = mba1; i < mba2; i++) { - int j, xy; - - s->mb_x = ((h->gob_number - 1) % 2) * 11 + i % 11; - s->mb_y = ((h->gob_number - 1) / 2) * 3 + i / 11; - xy = s->mb_x + s->mb_y * s->mb_stride; - h261_init_dest(s); - - for (j = 0; j < 6; j++) - s->block_last_index[j] = -1; - - s->mv_dir = MV_DIR_FORWARD; - s->mv_type = MV_TYPE_16X16; - s->current_picture.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0; - s->mv[0][0][0] = 0; - s->mv[0][0][1] = 0; - s->mb_skipped = 1; - h->common.mtype &= ~MB_TYPE_H261_FIL; - - if (s->current_picture.motion_val[0]) { - int b_stride = 2*s->mb_width + 1; - int b_xy = 2 * s->mb_x + (2 * s->mb_y) * b_stride; - s->current_picture.motion_val[0][b_xy][0] = s->mv[0][0][0]; - s->current_picture.motion_val[0][b_xy][1] = s->mv[0][0][1]; - } - - ff_mpv_reconstruct_mb(s, s->block); - } - - return 0; -} - -static const int mvmap[17] = { - 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12, -13, -14, -15, -16 -}; - -static int decode_mv_component(GetBitContext *gb, int v) -{ - int mv_diff = get_vlc2(gb, h261_mv_vlc.table, H261_MV_VLC_BITS, 2); - - /* check if mv_diff is valid */ - if (mv_diff < 0) - return v; - - mv_diff = mvmap[mv_diff]; - - if (mv_diff && !get_bits1(gb)) - mv_diff = -mv_diff; - - v += mv_diff; - if (v <= -16) - v += 32; - else if (v >= 16) - v -= 32; - - return v; -} - -/** - * Decode a macroblock. - * @return <0 if an error occurred - */ -static int h261_decode_block(H261DecContext *h, int16_t *block, int n, int coded) -{ - MpegEncContext *const s = &h->s; - int level, i, j, run; - RLTable *rl = &ff_h261_rl_tcoeff; - const uint8_t *scan_table; - - /* For the variable length encoding there are two code tables, one being - * used for the first transmitted LEVEL in INTER, INTER + MC and - * INTER + MC + FIL blocks, the second for all other LEVELs except the - * first one in INTRA blocks which is fixed length coded with 8 bits. - * NOTE: The two code tables only differ in one VLC so we handle that - * manually. */ - scan_table = s->intra_scantable.permutated; - if (s->mb_intra) { - /* DC coef */ - level = get_bits(&s->gb, 8); - // 0 (00000000b) and -128 (10000000b) are FORBIDDEN - if ((level & 0x7F) == 0) { - av_log(s->avctx, AV_LOG_ERROR, "illegal dc %d at %d %d\n", - level, s->mb_x, s->mb_y); - return -1; - } - /* The code 1000 0000 is not used, the reconstruction level of 1024 - * being coded as 1111 1111. */ - if (level == 255) - level = 128; - block[0] = level; - i = 1; - } else if (coded) { - // Run Level Code - // EOB Not possible for first level when cbp is available (that's why the table is different) - // 0 1 1s - // * * 0* - int check = show_bits(&s->gb, 2); - i = 0; - if (check & 0x2) { - skip_bits(&s->gb, 2); - block[0] = (check & 0x1) ? -1 : 1; - i = 1; - } - } else { - i = 0; - } - if (!coded) { - s->block_last_index[n] = i - 1; - return 0; - } - { - OPEN_READER(re, &s->gb); - i--; // offset by -1 to allow direct indexing of scan_table - for (;;) { - UPDATE_CACHE(re, &s->gb); - GET_RL_VLC(level, run, re, &s->gb, rl->rl_vlc[0], TCOEFF_VLC_BITS, 2, 0); - if (run == 66) { - if (level) { - CLOSE_READER(re, &s->gb); - av_log(s->avctx, AV_LOG_ERROR, "illegal ac vlc code at %dx%d\n", - s->mb_x, s->mb_y); - return -1; - } - /* escape */ - /* The remaining combinations of (run, level) are encoded with a - * 20-bit word consisting of 6 bits escape, 6 bits run and 8 bits - * level. 
*/ - run = SHOW_UBITS(re, &s->gb, 6) + 1; - SKIP_CACHE(re, &s->gb, 6); - level = SHOW_SBITS(re, &s->gb, 8); - SKIP_COUNTER(re, &s->gb, 6 + 8); - } else if (level == 0) { - break; - } else { - if (SHOW_UBITS(re, &s->gb, 1)) - level = -level; - SKIP_COUNTER(re, &s->gb, 1); - } - i += run; - if (i >= 64) { - CLOSE_READER(re, &s->gb); - av_log(s->avctx, AV_LOG_ERROR, "run overflow at %dx%d\n", - s->mb_x, s->mb_y); - return -1; - } - j = scan_table[i]; - block[j] = level; - } - CLOSE_READER(re, &s->gb); - } - s->block_last_index[n] = i; - return 0; -} - -static int h261_decode_mb(H261DecContext *h) -{ - MpegEncContext *const s = &h->s; - H261Context *const com = &h->common; - int i, cbp, xy; - - cbp = 63; - // Read mba - do { - h->mba_diff = get_vlc2(&s->gb, h261_mba_vlc.table, - H261_MBA_VLC_BITS, 2); - - /* Check for slice end */ - /* NOTE: GOB can be empty (no MB data) or exist only of MBA_stuffing */ - if (h->mba_diff == MBA_STARTCODE) { // start code - h->gob_start_code_skipped = 1; - return SLICE_END; - } - } while (h->mba_diff == MBA_STUFFING); // stuffing - - if (h->mba_diff < 0) { - if (get_bits_left(&s->gb) <= 7) - return SLICE_END; - - av_log(s->avctx, AV_LOG_ERROR, "illegal mba at %d %d\n", s->mb_x, s->mb_y); - return SLICE_ERROR; - } - - h->mba_diff += 1; - h->current_mba += h->mba_diff; - - if (h->current_mba > MBA_STUFFING) - return SLICE_ERROR; - - s->mb_x = ((h->gob_number - 1) % 2) * 11 + ((h->current_mba - 1) % 11); - s->mb_y = ((h->gob_number - 1) / 2) * 3 + ((h->current_mba - 1) / 11); - xy = s->mb_x + s->mb_y * s->mb_stride; - h261_init_dest(s); - - // Read mtype - com->mtype = get_vlc2(&s->gb, h261_mtype_vlc.table, H261_MTYPE_VLC_BITS, 2); - if (com->mtype < 0) { - av_log(s->avctx, AV_LOG_ERROR, "Invalid mtype index %d\n", - com->mtype); - return SLICE_ERROR; - } - av_assert0(com->mtype < FF_ARRAY_ELEMS(ff_h261_mtype_map)); - com->mtype = ff_h261_mtype_map[com->mtype]; - - // Read mquant - if (IS_QUANT(com->mtype)) - ff_set_qscale(s, get_bits(&s->gb, 5)); - - s->mb_intra = IS_INTRA4x4(com->mtype); - - // Read mv - if (IS_16X16(com->mtype)) { - /* Motion vector data is included for all MC macroblocks. MVD is - * obtained from the macroblock vector by subtracting the vector - * of the preceding macroblock. For this calculation the vector - * of the preceding macroblock is regarded as zero in the - * following three situations: - * 1) evaluating MVD for macroblocks 1, 12 and 23; - * 2) evaluating MVD for macroblocks in which MBA does not represent a difference of 1; - * 3) MTYPE of the previous macroblock was not MC. 
*/ - if ((h->current_mba == 1) || (h->current_mba == 12) || - (h->current_mba == 23) || (h->mba_diff != 1)) { - h->current_mv_x = 0; - h->current_mv_y = 0; - } - - h->current_mv_x = decode_mv_component(&s->gb, h->current_mv_x); - h->current_mv_y = decode_mv_component(&s->gb, h->current_mv_y); - } else { - h->current_mv_x = 0; - h->current_mv_y = 0; - } - - // Read cbp - if (HAS_CBP(com->mtype)) - cbp = get_vlc2(&s->gb, h261_cbp_vlc.table, H261_CBP_VLC_BITS, 1) + 1; - - if (s->mb_intra) { - s->current_picture.mb_type[xy] = MB_TYPE_INTRA; - goto intra; - } - - //set motion vectors - s->mv_dir = MV_DIR_FORWARD; - s->mv_type = MV_TYPE_16X16; - s->current_picture.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0; - s->mv[0][0][0] = h->current_mv_x * 2; // gets divided by 2 in motion compensation - s->mv[0][0][1] = h->current_mv_y * 2; - - if (s->current_picture.motion_val[0]) { - int b_stride = 2*s->mb_width + 1; - int b_xy = 2 * s->mb_x + (2 * s->mb_y) * b_stride; - s->current_picture.motion_val[0][b_xy][0] = s->mv[0][0][0]; - s->current_picture.motion_val[0][b_xy][1] = s->mv[0][0][1]; - } - -intra: - /* decode each block */ - if (s->mb_intra || HAS_CBP(com->mtype)) { - s->bdsp.clear_blocks(s->block[0]); - for (i = 0; i < 6; i++) { - if (h261_decode_block(h, s->block[i], i, cbp & 32) < 0) - return SLICE_ERROR; - cbp += cbp; - } - } else { - for (i = 0; i < 6; i++) - s->block_last_index[i] = -1; - } - - ff_mpv_reconstruct_mb(s, s->block); - - return SLICE_OK; -} - -/** - * Decode the H.261 picture header. - * @return <0 if no startcode found - */ -static int h261_decode_picture_header(H261DecContext *h) -{ - MpegEncContext *const s = &h->s; - int format, i; - uint32_t startcode = 0; - - for (i = get_bits_left(&s->gb); i > 24; i -= 1) { - startcode = ((startcode << 1) | get_bits(&s->gb, 1)) & 0x000FFFFF; - - if (startcode == 0x10) - break; - } - - if (startcode != 0x10) { - av_log(s->avctx, AV_LOG_ERROR, "Bad picture start code\n"); - return -1; - } - - /* temporal reference */ - i = get_bits(&s->gb, 5); /* picture timestamp */ - if (i < (s->picture_number & 31)) - i += 32; - s->picture_number = (s->picture_number & ~31) + i; - - s->avctx->framerate = (AVRational) { 30000, 1001 }; - - /* PTYPE starts here */ - skip_bits1(&s->gb); /* split screen off */ - skip_bits1(&s->gb); /* camera off */ - skip_bits1(&s->gb); /* freeze picture release off */ - - format = get_bits1(&s->gb); - - // only 2 formats possible - if (format == 0) { // QCIF - s->width = 176; - s->height = 144; - s->mb_width = 11; - s->mb_height = 9; - } else { // CIF - s->width = 352; - s->height = 288; - s->mb_width = 22; - s->mb_height = 18; - } - - s->mb_num = s->mb_width * s->mb_height; - - skip_bits1(&s->gb); /* still image mode off */ - skip_bits1(&s->gb); /* Reserved */ - - /* PEI */ - if (skip_1stop_8data_bits(&s->gb) < 0) - return AVERROR_INVALIDDATA; - - /* H.261 has no I-frames, but if we pass AV_PICTURE_TYPE_I for the first - * frame, the codec crashes if it does not contain all I-blocks - * (e.g. when a packet is lost). 
*/ - s->pict_type = AV_PICTURE_TYPE_P; - - h->gob_number = 0; - return 0; -} - -static int h261_decode_gob(H261DecContext *h) -{ - MpegEncContext *const s = &h->s; - - ff_set_qscale(s, s->qscale); - - /* decode mb's */ - while (h->current_mba <= MBA_STUFFING) { - int ret; - /* DCT & quantize */ - ret = h261_decode_mb(h); - if (ret < 0) { - if (ret == SLICE_END) { - h261_decode_mb_skipped(h, h->current_mba, 33); - return 0; - } - av_log(s->avctx, AV_LOG_ERROR, "Error at MB: %d\n", - s->mb_x + s->mb_y * s->mb_stride); - return -1; - } - - h261_decode_mb_skipped(h, - h->current_mba - h->mba_diff, - h->current_mba - 1); - } - - return -1; -} - -/** - * returns the number of bytes consumed for building the current frame - */ -static int get_consumed_bytes(MpegEncContext *s, int buf_size) -{ - int pos = get_bits_count(&s->gb) >> 3; - if (pos == 0) - pos = 1; // avoid infinite loops (i doubt that is needed but ...) - if (pos + 10 > buf_size) - pos = buf_size; // oops ;) - - return pos; -} - -static int h261_decode_frame(AVCodecContext *avctx, AVFrame *pict, - int *got_frame, AVPacket *avpkt) -{ - H261DecContext *const h = avctx->priv_data; - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - MpegEncContext *s = &h->s; - int ret; - - ff_dlog(avctx, "*****frame %"PRId64" size=%d\n", avctx->frame_num, buf_size); - ff_dlog(avctx, "bytes=%x %x %x %x\n", buf[0], buf[1], buf[2], buf[3]); - - h->gob_start_code_skipped = 0; - -retry: - init_get_bits(&s->gb, buf, buf_size * 8); - - ret = h261_decode_picture_header(h); - - /* skip if the header was thrashed */ - if (ret < 0) { - av_log(s->avctx, AV_LOG_ERROR, "header damaged\n"); - return -1; - } - - if (s->width != avctx->coded_width || s->height != avctx->coded_height) { - ff_mpv_common_end(s); - } - - if (!s->context_initialized) { - if ((ret = ff_mpv_common_init(s)) < 0) - return ret; - - ret = ff_set_dimensions(avctx, s->width, s->height); - if (ret < 0) - return ret; - - goto retry; - } - - // for skipping the frame - s->current_picture.f->pict_type = s->pict_type; - s->current_picture.f->key_frame = s->pict_type == AV_PICTURE_TYPE_I; - - if ((avctx->skip_frame >= AVDISCARD_NONREF && s->pict_type == AV_PICTURE_TYPE_B) || - (avctx->skip_frame >= AVDISCARD_NONKEY && s->pict_type != AV_PICTURE_TYPE_I) || - avctx->skip_frame >= AVDISCARD_ALL) - return get_consumed_bytes(s, buf_size); - - if (ff_mpv_frame_start(s, avctx) < 0) - return -1; - - ff_mpeg_er_frame_start(s); - - /* decode each macroblock */ - s->mb_x = 0; - s->mb_y = 0; - - while (h->gob_number < (s->mb_height == 18 ? 
12 : 5)) { - if (h261_resync(h) < 0) - break; - h261_decode_gob(h); - } - ff_mpv_frame_end(s); - - av_assert0(s->current_picture.f->pict_type == s->current_picture_ptr->f->pict_type); - av_assert0(s->current_picture.f->pict_type == s->pict_type); - - if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0) - return ret; - ff_print_debug_info(s, s->current_picture_ptr, pict); - - *got_frame = 1; - - return get_consumed_bytes(s, buf_size); -} - -static av_cold int h261_decode_end(AVCodecContext *avctx) -{ - H261DecContext *const h = avctx->priv_data; - MpegEncContext *s = &h->s; - - ff_mpv_common_end(s); - return 0; -} - -const FFCodec ff_h261_decoder = { - .p.name = "h261", - CODEC_LONG_NAME("H.261"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_H261, - .priv_data_size = sizeof(H261DecContext), - .init = h261_decode_init, - .close = h261_decode_end, - FF_CODEC_DECODE_CB(h261_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, - .p.max_lowres = 3, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaribcaption.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaribcaption.c deleted file mode 100644 index 747ca8a2e47cb0eb3d2ba2e7c0a6a8a7aeb79b29..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaribcaption.c +++ /dev/null @@ -1,1171 +0,0 @@ -/* - * ARIB STD-B24 caption decoder using the libaribcaption library - * Copyright (c) 2022 TADANO Tokumei - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "avcodec.h" -#include "codec_internal.h" -#include "internal.h" -#include "libavcodec/ass.h" -#include "libavutil/avstring.h" -#include "libavutil/avutil.h" -#include "libavutil/thread.h" -#include "libavutil/log.h" -#include "libavutil/opt.h" - -#include - -#if !defined(DEFAULT_FONT_ASS) -# define DEFAULT_FONT_ASS "sans-serif" -#endif - -#define ARIBC_BPRINT_SIZE_INIT 64 -#define ARIBC_BPRINT_SIZE_MAX (8 * 1024) -#define ARIBC_ALPHA_MAX_NUM 4 -#define ARIBC_ALPHA_DEFAULT_FRONT 0xFF -#define ARIBC_ALPHA_DEFAULT_BACK 0x80 - -#define ARIBCC_COLOR_RGB(c) ((c) & 0xFFFFFF) -#define ARIBCC_COLOR_DIFF_RGB(c1,c2) (((c1) ^ (c2)) & 0x00FFFFFF) -#define ARIBCC_COLOR_DIFF_A(c1,c2) (((c1) ^ (c2)) & 0xFF000000) - -#define CLUT_RGBA(r,g,b,a) (((unsigned)(a) << 24) | ((r) << 16) | ((g) << 8) | (b)) -#define CLUT_A(c) (((c) >> 24) & 0xFF) -#define CLUT_R(c) (((c) >> 16) & 0xFF) -#define CLUT_G(c) (((c) >> 8) & 0xFF) -#define CLUT_B(c) ( (c) & 0xFF) - -#define ARIBCC_COLOR_TO_CLUT_RGBA(c,a) (((ARIBCC_COLOR_A(c) ? 
ARIBCC_COLOR_A(c) : (a)) << 24) | \ - (ARIBCC_COLOR_R(c) << 16) | \ - (ARIBCC_COLOR_G(c) << 8) | \ - (ARIBCC_COLOR_B(c))) - -typedef struct ARIBCaptionContext { - AVClass *class; - AVCodecContext *avctx; - const AVPacket *avpkt; - AVSubtitle *sub; - - aribcc_context_t *context; - aribcc_decoder_t *decoder; - aribcc_renderer_t *renderer; - - int subtitle_type; - int encoding_scheme; - bool ass_single_rect; - char *font; - bool replace_fullwidth_ascii; - bool force_stroke_text; - bool ignore_background; - bool ignore_ruby; - float stroke_width; - bool replace_drcs; - - int64_t pts; - AVRational time_base; - int canvas_width; - int canvas_height; - int plane_width; - int plane_height; - int frame_width; - int frame_height; - int bitmap_plane_width; - int bitmap_plane_height; - int font_size; - int charstyle; - int border_style; - int readorder; - - aribcc_caption_t caption; - aribcc_render_result_t render_result; - uint32_t *clut; - int clut_idx; - int clut_overflow; - uint8_t clut_alpha[ARIBC_ALPHA_MAX_NUM]; -} ARIBCaptionContext; - -static void hex_dump_debug(void *ctx, const char *buf, int buf_size) -{ - int i; - - for (i = 0; i < buf_size; i++) { - ff_dlog(ctx, "%02hhx ", buf[i]); - if (i % 16 == 15) - ff_dlog(ctx, "\n"); - } - if (i % 16) - ff_dlog(ctx, "\n"); -} - -static void logcat_callback(aribcc_loglevel_t level, const char* message, void* userdata) -{ - ARIBCaptionContext *ctx = userdata; - int lvl; - - if (ctx->decoder != NULL) { - switch (level) { - case ARIBCC_LOGLEVEL_ERROR: - lvl = AV_LOG_ERROR; - break; - case ARIBCC_LOGLEVEL_WARNING: - lvl = AV_LOG_WARNING; - break; - default: - lvl = AV_LOG_INFO; - } - - av_log(ctx, lvl, "%s\n", message); - } -} - -static void estimate_video_frame_size(ARIBCaptionContext *ctx) -{ - if (ctx->avctx->width > 0 && ctx->avctx->height > 0) { - /* input video size specified by -canvas_size option */ - ctx->bitmap_plane_width = ctx->avctx->width; - ctx->bitmap_plane_height = ctx->avctx->height; - } else if (ctx->plane_width == 960) { - /* ARIB TR-B14 Fascicle 2 Volume 3 [Section 2] 4.3.1 */ - /* ARIB TR-B14 Fascicle 2 Volume 3 [Section 2] Appendix-4 */ - ctx->bitmap_plane_width = 1440; - ctx->bitmap_plane_height = 1080; - } else { - ctx->bitmap_plane_width = ctx->plane_width; - ctx->bitmap_plane_height = ctx->plane_height; - } - /* Expand either width or height */ - if (ctx->bitmap_plane_height * ctx->plane_width > ctx->bitmap_plane_width * ctx->plane_height) { - ctx->frame_height = ctx->bitmap_plane_height; - ctx->frame_width = ctx->frame_height * ctx->plane_width / ctx->plane_height; - } else { - ctx->frame_width = ctx->bitmap_plane_width; - ctx->frame_height = ctx->frame_width * ctx->plane_height / ctx->plane_width; - } -} - -static void clut_set_alpha(ARIBCaptionContext *ctx, uint8_t a) -{ - int i; - - for (i = 0; i < ARIBC_ALPHA_MAX_NUM; i++) { - if (ctx->clut_alpha[i] == 0) { - ctx->clut_alpha[i] = a; - return; - } - if (ctx->clut_alpha[i] == a) - return; - } - return; -} - -static uint8_t clut_find_nearlest_alpha(ARIBCaptionContext *ctx, uint8_t a) -{ - int i, j, d; - - if (a == 0) - return a; - d = 256; - j = 0; - for (i = 0; i < ARIBC_ALPHA_MAX_NUM; i++) { - if (ctx->clut_alpha[i] == a) - return a; - if (ctx->clut_alpha[i] == 0) - break; - if (abs((int)a - (int)ctx->clut_alpha[i]) < d) { - d = abs((int)a - (int)ctx->clut_alpha[i]); - j = i; - } - } - return ctx->clut_alpha[j]; -} - -static int clut_find(ARIBCaptionContext *ctx, uint32_t rgba) -{ - int i; - - for (i = 0; i < ctx->clut_idx; i++) { - if (ctx->clut[i] == rgba) - return i; - } 
- return -1; -} - -static inline int clut_color_distance(uint32_t rgba1, uint32_t rgba2) -{ - return abs((int)CLUT_R(rgba1) - (int)CLUT_R(rgba2)) + - abs((int)CLUT_G(rgba1) - (int)CLUT_G(rgba2)) + - abs((int)CLUT_B(rgba1) - (int)CLUT_B(rgba2)); -} - -static uint8_t clut_pick_or_set(ARIBCaptionContext *ctx, int r, int g, int b, int a) -{ - int c, i, d, d_min; - uint32_t rgba; - - a = clut_find_nearlest_alpha(ctx, a); - if (a == 0) - return 0; /* transparent */ - rgba = CLUT_RGBA(r,g,b,a); - - d_min = 256 * 3; - c = 0; - for (i = 0; i < ctx->clut_idx; i++) { - if (ctx->clut[i] == rgba) - return i; - if (CLUT_A(ctx->clut[i]) != a) - continue; - d = clut_color_distance(ctx->clut[i], rgba); - if (d < d_min) { - d_min = d; - c = i; - } - } - if (d_min > 3) { - if (ctx->clut_idx >= AVPALETTE_COUNT) - ctx->clut_overflow++; - else { - c = ctx->clut_idx; - ctx->clut[ctx->clut_idx++] = rgba; - } - } - return c; -} - -/* initialiaze CLUT with each character colors */ -static void clut_init(ARIBCaptionContext *ctx, aribcc_caption_region_t *region) -{ - aribcc_color_t text_color, back_color, stroke_color; - uint32_t rgba; - - ctx->clut[0] = CLUT_RGBA(0,0,0,0); /* transparent */ - ctx->clut_alpha[0] = 0xFF; - ctx->clut_idx = 1; - ctx->clut_overflow = 0; - text_color = region->chars[0].text_color; - back_color = region->chars[0].back_color; - stroke_color = region->chars[0].stroke_color; - rgba = ARIBCC_COLOR_TO_CLUT_RGBA(text_color, ARIBC_ALPHA_DEFAULT_FRONT); - ctx->clut[ctx->clut_idx++] = rgba; - clut_set_alpha(ctx, CLUT_A(rgba)); - rgba = ARIBCC_COLOR_TO_CLUT_RGBA(back_color, ARIBC_ALPHA_DEFAULT_BACK); - ctx->clut[ctx->clut_idx++] = rgba; - clut_set_alpha(ctx, CLUT_A(rgba)); - rgba = ARIBCC_COLOR_TO_CLUT_RGBA(stroke_color, ARIBC_ALPHA_DEFAULT_FRONT); - if (clut_find(ctx, rgba) < 0) { - ctx->clut[ctx->clut_idx++] = rgba; - clut_set_alpha(ctx, CLUT_A(rgba)); - } - - for (int i = 1; i < region->char_count; i++) { - if (region->chars[i].text_color != text_color) { - rgba = ARIBCC_COLOR_TO_CLUT_RGBA(region->chars[i].text_color, - ARIBC_ALPHA_DEFAULT_FRONT); - if (clut_find(ctx, rgba) < 0) { - ctx->clut[ctx->clut_idx++] = rgba; - clut_set_alpha(ctx, CLUT_A(rgba)); - } - } - if (region->chars[i].back_color != back_color) { - rgba = ARIBCC_COLOR_TO_CLUT_RGBA(region->chars[i].back_color, - ARIBC_ALPHA_DEFAULT_BACK); - if (clut_find(ctx, rgba) < 0) { - ctx->clut[ctx->clut_idx++] = rgba; - clut_set_alpha(ctx, CLUT_A(rgba)); - } - } - if (region->chars[i].stroke_color != stroke_color) { - rgba = ARIBCC_COLOR_TO_CLUT_RGBA(region->chars[i].stroke_color, - ARIBC_ALPHA_DEFAULT_FRONT); - if (clut_find(ctx, rgba) < 0) { - if (ctx->clut_idx < AVPALETTE_COUNT) - ctx->clut[ctx->clut_idx++] = rgba; - clut_set_alpha(ctx, CLUT_A(rgba)); - } - } - } -} - -/** - * aribcaption_trans_{bitmap|ass|text}_subtitle() - * - * Transfer decoded subtitle to AVSubtitle with corresponding subtitle type. 
- * - * @param ctx pointer to the ARIBCaptionContext - * @return > 0 number of rectangles to be displayed - * = 0 no subtitle - * < 0 error code - */ -static int aribcaption_trans_bitmap_subtitle(ARIBCaptionContext *ctx) -{ - int ret = 0; - AVSubtitle *sub = ctx->sub; - int status, rect_idx; - int old_width = ctx->frame_width; - int old_height = ctx->frame_height; - - if (ctx->caption.plane_width > 0 && ctx->caption.plane_height > 0) { - ctx->plane_width = ctx->caption.plane_width; - ctx->plane_height = ctx->caption.plane_height; - } - estimate_video_frame_size(ctx); - if (ctx->frame_width != old_width || ctx->frame_height != old_height) { - ff_dlog(ctx, "canvas: %dx%d plane: %dx%d bitmap: %dx%d frame: %dx%d\n", - ctx->avctx->width, ctx->avctx->height, - ctx->plane_width, ctx->plane_height, - ctx->bitmap_plane_width, ctx->bitmap_plane_height, - ctx->frame_width, ctx->frame_height); - if (!aribcc_renderer_set_frame_size(ctx->renderer, - ctx->frame_width, ctx->frame_height)) { - av_log(ctx, AV_LOG_ERROR, - "aribcc_renderer_set_frame_size() returned with error.\n"); - return AVERROR_EXTERNAL; - } - } - - status = aribcc_renderer_append_caption(ctx->renderer, &ctx->caption); - if (!status) { - av_log(ctx, AV_LOG_ERROR, - "aribcc_renderer_append_caption() returned with error.\n"); - return AVERROR_EXTERNAL; - } - - status = aribcc_renderer_render(ctx->renderer, ctx->pts, &ctx->render_result); - switch (status) { - case ARIBCC_RENDER_STATUS_GOT_IMAGE: - break; - - case ARIBCC_RENDER_STATUS_GOT_IMAGE_UNCHANGED: - aribcc_render_result_cleanup(&ctx->render_result); - ff_dlog(ctx, "got image unchanged\n"); - return 0; - - case ARIBCC_RENDER_STATUS_NO_IMAGE: - ff_dlog(ctx, "no image\n"); - return 0; - - case ARIBCC_RENDER_STATUS_ERROR: - av_log(ctx, AV_LOG_ERROR, - "aribcc_renderer_render() returned with error.\n"); - return AVERROR_EXTERNAL; - - default: - aribcc_render_result_cleanup(&ctx->render_result); - av_log(ctx, AV_LOG_ERROR, - "aribcc_renderer_render() returned unknown status: %d\n", status); - return AVERROR_EXTERNAL; - } - - if (!ctx->render_result.image_count || ctx->render_result.images == NULL) { - aribcc_render_result_cleanup(&ctx->render_result); - ff_dlog(ctx, "no image (%d)\n", ctx->render_result.image_count); - return 0; - } - - sub->format = 0; /* graphic */ - sub->rects = av_calloc(ctx->render_result.image_count, sizeof(*sub->rects)); - if (!sub->rects) { - ret = AVERROR(ENOMEM); - goto fail; - } - for (int i = 0; i < ctx->render_result.image_count; i++) { - sub->rects[i] = av_mallocz(sizeof(*sub->rects[i])); - if (!sub->rects[i]) { - ret = AVERROR(ENOMEM); - goto fail; - } - } - - for (rect_idx = 0; rect_idx < ctx->caption.region_count; rect_idx++) { - AVSubtitleRect *rect = sub->rects[rect_idx]; - aribcc_image_t *image = &ctx->render_result.images[rect_idx]; - int w, h, shrink_height, dst_idx; - - clut_init(ctx, &ctx->caption.regions[rect_idx]); - - rect->w = image->width * ctx->bitmap_plane_width / ctx->frame_width; - rect->h = image->height * ctx->bitmap_plane_height / ctx->frame_height; - rect->data[0] = av_mallocz(rect->w * rect->h); - if (!rect->data[0]) { - ret = AVERROR(ENOMEM); - goto fail; - } - if ((image->height != rect->h && image->width != rect->w) || - image->stride < image->width * 4 || - image->stride * image->height > image->bitmap_size) { - av_log(ctx, AV_LOG_ERROR, "Bug: unexpected rendered image: %d(%d)x%d -> %dx%d\n", - image->width, image->stride / 4, image->height, rect->w, rect->h); - ret = AVERROR_EXTERNAL; - goto fail; - } - - shrink_height = 
image->height != rect->h; - dst_idx = 0; - for (h = 0; h < rect->h; h++) { - for (w = 0; w < rect->w; w++) { - /* Bi-linear interpolation */ - int n, m, idx0, idx1, r, g, b, a; - if (shrink_height) { - int div_a, y0, y1; - div_a = h * ctx->frame_height; - n = ctx->bitmap_plane_height; - y0 = div_a / n; - y1 = FFMIN(y0 + 1, image->height - 1); - m = div_a - n * y0; - idx0 = image->stride * y0 + w * 4; - idx1 = image->stride * y1 + w * 4; - } else { - int div_a, x0, x1; - div_a = w * ctx->frame_width; - n = ctx->bitmap_plane_width; - x0 = div_a / n; - x1 = FFMIN(x0 + 1, image->width - 1); - m = div_a - n * x0; - idx0 = image->stride * h + x0 * 4; - idx1 = image->stride * h + x1 * 4; - } - r = (image->bitmap[idx0++] * (n - m) + image->bitmap[idx1++] * m) / n; - g = (image->bitmap[idx0++] * (n - m) + image->bitmap[idx1++] * m) / n; - b = (image->bitmap[idx0++] * (n - m) + image->bitmap[idx1++] * m) / n; - a = (image->bitmap[idx0++] * (n - m) + image->bitmap[idx1++] * m) / n; - rect->data[0][dst_idx++] = clut_pick_or_set(ctx, r, g, b, a); - } - } - rect->data[1] = av_memdup(ctx->clut, AVPALETTE_SIZE); - if (!rect->data[1]) { - ret = AVERROR(ENOMEM); - goto fail; - } - - if (ctx->avctx->profile == FF_PROFILE_ARIB_PROFILE_C) { - /* ARIB TR-B14 version 3.8 Fascicle 1-(2/2) Volume 3 [Section 4] */ - /* No position information is provided for profile C */ - rect->x = (ctx->frame_width - rect->w) / 2; - rect->y = ctx->frame_height - rect->h * (ctx->caption.region_count - rect_idx); - } else { - rect->x = image->dst_x * ctx->bitmap_plane_width / ctx->frame_width; - rect->y = image->dst_y * ctx->bitmap_plane_height / ctx->frame_height; - } - rect->type = SUBTITLE_BITMAP; - rect->linesize[0] = rect->w; - rect->nb_colors = 256; - - ff_dlog(ctx, "BITMAP subtitle%s (%d,%d) %dx%d -> (%d,%d) %dx%d [%d]: %d colors\n", - (ctx->caption.regions[rect_idx].is_ruby) ? 
" (ruby)" : "", - image->dst_x, image->dst_y, image->width, image->height, - rect->x, rect->y, rect->w, rect->h, - rect_idx, ctx->clut_idx); - if (ctx->clut_overflow) - av_log(ctx, AV_LOG_WARNING, "CLUT overflow (%d).\n", ctx->clut_overflow); - } - sub->num_rects = rect_idx; - - return rect_idx; - -fail: - if (sub->rects) { - for (int i = 0; i < ctx->caption.region_count; i++) { - if (sub->rects[i]) { - av_freep(&sub->rects[i]->data[0]); - av_freep(&sub->rects[i]->data[1]); - av_freep(&sub->rects[i]); - } - } - av_freep(&sub->rects); - } - sub->num_rects = 0; - - return ret; -} - -static int set_ass_header(ARIBCaptionContext *ctx) -{ - AVCodecContext *avctx = ctx->avctx; - int outline, shadow; - const char *font_name; - const char *fonts = ctx->font; - - if (ctx->border_style == 4) { - outline = 0; - shadow = 4; - } else { - outline = 1; - shadow = 0; - } - if (ctx->force_stroke_text) - outline = (int)(ctx->stroke_width * 4.0 / 3.0); - - if (fonts && *fonts) - font_name = av_get_token(&fonts, ","); - else - font_name = av_strdup(DEFAULT_FONT_ASS); - if (!font_name) - return AVERROR(ENOMEM); - - av_freep(&avctx->subtitle_header); - avctx->subtitle_header = av_asprintf( - "[Script Info]\r\n" - "ScriptType: v4.00+\r\n" - "PlayResX: %d\r\n" - "PlayResY: %d\r\n" - "WrapStyle: 2\r\n" /* 2: no word wrapping */ - "\r\n" - - "[V4+ Styles]\r\n" - "Format: Name, " - "Fontname, Fontsize, " - "PrimaryColour, SecondaryColour, OutlineColour, BackColour, " - "Bold, Italic, Underline, StrikeOut, " - "ScaleX, ScaleY, " - "Spacing, Angle, " - "BorderStyle, Outline, Shadow, " - "Alignment, MarginL, MarginR, MarginV, " - "Encoding\r\n" - - "Style: " - "Default," /* Name */ - "%s,%d," /* Font{name,size} */ - "&H%x,&H%x,&H%x,&H%x," /* {Primary,Secondary,Outline,Back}Colour */ - "%d,%d,%d,0," /* Bold, Italic, Underline, StrikeOut */ - "100,100," /* Scale{X,Y} */ - "0,0," /* Spacing, Angle */ - "%d,%d,%d," /* BorderStyle, Outline, Shadow */ - "%d,10,10,10," /* Alignment, Margin[LRV] */ - "0\r\n" /* Encoding */ - "\r\n" - - "[Events]\r\n" - "Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text\r\n", - ctx->plane_width, ctx->plane_height, - font_name, ctx->font_size, - ASS_DEFAULT_COLOR, ASS_DEFAULT_COLOR, - ASS_DEFAULT_BACK_COLOR, ASS_DEFAULT_BACK_COLOR, - -ASS_DEFAULT_BOLD, -ASS_DEFAULT_ITALIC, -ASS_DEFAULT_UNDERLINE, - ctx->border_style, outline, shadow, ASS_DEFAULT_ALIGNMENT); - - av_freep(&font_name); - if (!avctx->subtitle_header) - return AVERROR(ENOMEM); - avctx->subtitle_header_size = strlen(avctx->subtitle_header); - return 0; -} - -static void set_ass_color(AVBPrint *buf, int color_num, - aribcc_color_t new_color, aribcc_color_t old_color) -{ - if (ARIBCC_COLOR_DIFF_RGB(new_color, old_color)) - av_bprintf(buf, "{\\%dc&H%06x&}", color_num, - ARIBCC_COLOR_RGB(new_color)); - if (ARIBCC_COLOR_DIFF_A(new_color, old_color)) - av_bprintf(buf, "{\\%da&H%02x&}", color_num, - 0xFF - ARIBCC_COLOR_A(new_color)); -} - -static int aribcaption_trans_ass_subtitle(ARIBCaptionContext *ctx) -{ - AVSubtitle *sub = ctx->sub; - AVBPrint buf; - bool single_rect = ctx->ass_single_rect; - int ret = 0, rect_idx; - - if (ctx->caption.plane_width > 0 && ctx->caption.plane_height > 0 && - (ctx->caption.plane_width != ctx->plane_width || - ctx->caption.plane_height != ctx->plane_height)) { - ctx->plane_width = ctx->caption.plane_width; - ctx->plane_height = ctx->caption.plane_height; - if ((ret = set_ass_header(ctx)) < 0) - return ret; - } - - /* ARIB TR-B14 version 3.8 Fascicle 1-(2/2) Volume 3 [Section 4] 
*/ - /* No position information is provided for profile C */ - if (ctx->avctx->profile == FF_PROFILE_ARIB_PROFILE_C) - single_rect = true; - - sub->format = 1; /* text */ - if (ctx->caption.region_count == 0) { - /* clear previous caption for indefinite duration */ - ff_ass_add_rect(sub, "", ctx->readorder++, 0, NULL, NULL); - return 1; - } - - av_bprint_init(&buf, ARIBC_BPRINT_SIZE_INIT, ARIBC_BPRINT_SIZE_MAX); - - if (single_rect && ctx->avctx->profile != FF_PROFILE_ARIB_PROFILE_C) { - int x, y, rx, ry; - x = ctx->plane_width; - y = ctx->plane_height; - for (int i = 0; i < ctx->caption.region_count; i++) { - rx = ctx->caption.regions[i].x; - ry = ctx->caption.regions[i].y; - if (rx < x) - x = rx; - if (ry < y) - y = ry; - } - av_bprintf(&buf, "{\\an7}"); - if (y < 0) - y += ctx->plane_height; - if (x > 0 || y > 0) - av_bprintf(&buf, "{\\pos(%d,%d)}", x, y); - } - - rect_idx = 0; - for (int i = 0; i < ctx->caption.region_count; i++) { - aribcc_caption_region_t *region = &ctx->caption.regions[i]; - aribcc_color_t text_color = ARIBCC_MAKE_RGBA(0xFF, 0xFF, 0xFF, - ARIBC_ALPHA_DEFAULT_FRONT); - aribcc_color_t stroke_color = ARIBCC_MAKE_RGBA(0, 0, 0, - ARIBC_ALPHA_DEFAULT_FRONT); - aribcc_color_t back_color = ARIBCC_MAKE_RGBA(0, 0, 0, - ARIBC_ALPHA_DEFAULT_BACK); - aribcc_charstyle_t charstyle = ctx->charstyle; - int char_width = ctx->font_size; - int char_height = ctx->font_size; - int char_horizontal_spacing = 0; - - if (region->is_ruby && ctx->ignore_ruby) - continue; - - if (!single_rect) { - int x = region->x; - int y = region->y; - if (x < 0) - x += ctx->plane_width; - if (y < 0) - y += ctx->plane_height; - av_bprint_clear(&buf); - av_bprintf(&buf, "{\\an7}"); - if (x > 0 || y > 0) - av_bprintf(&buf, "{\\pos(%d,%d)}", x, y); - } - if (region->is_ruby) - av_bprintf(&buf, "{\\fs%d}", char_height / 2); - - for (int j = 0; j < region->char_count; j++) { - aribcc_caption_char_t *ch = ®ion->chars[j]; - - if (ctx->avctx->profile != FF_PROFILE_ARIB_PROFILE_C) { - if (ch->char_horizontal_spacing != char_horizontal_spacing) { - av_bprintf(&buf, "{\\fsp%d}", (region->is_ruby) ? 
- ch->char_horizontal_spacing / 2 : - ch->char_horizontal_spacing); - char_horizontal_spacing = ch->char_horizontal_spacing; - } - if (ch->char_width != char_width) { - av_bprintf(&buf, "{\\fscx%"PRId64"}", - av_rescale(ch->char_width, 100, ctx->font_size)); - char_width = ch->char_width; - } - if (ch->char_height != char_height) { - av_bprintf(&buf, "{\\fscy%"PRId64"}", - av_rescale(ch->char_height, 100, ctx->font_size)); - char_height = ch->char_height; - } - } - if (ch->style != charstyle) { - aribcc_charstyle_t diff = ch->style ^ charstyle; - if (diff & ARIBCC_CHARSTYLE_STROKE) { - if (charstyle & ARIBCC_CHARSTYLE_STROKE) { - if (ctx->force_stroke_text) - av_bprintf(&buf, "{\\bord%d}", - (int)(ctx->stroke_width * 4.0 / 3.0)); - else - av_bprintf(&buf, "{\\bord0}"); - } else - av_bprintf(&buf, "{\\bord3}"); - } - if (diff & ARIBCC_CHARSTYLE_BOLD) { - if (charstyle & ARIBCC_CHARSTYLE_BOLD) - av_bprintf(&buf, "{\\b0}"); - else - av_bprintf(&buf, "{\\b1}"); - } - if (diff & ARIBCC_CHARSTYLE_ITALIC) { - if (charstyle & ARIBCC_CHARSTYLE_ITALIC) - av_bprintf(&buf, "{\\i0}"); - else - av_bprintf(&buf, "{\\i1}"); - } - if (diff & ARIBCC_CHARSTYLE_UNDERLINE) { - if (charstyle & ARIBCC_CHARSTYLE_UNDERLINE) - av_bprintf(&buf, "{\\u0}"); - else - av_bprintf(&buf, "{\\u1}"); - } - charstyle = ch->style; - } - if (ch->text_color != text_color) { - set_ass_color(&buf, 1, ch->text_color, text_color); - text_color = ch->text_color; - } - if (ch->stroke_color != stroke_color) { - set_ass_color(&buf, 3, ch->stroke_color, stroke_color); - stroke_color = ch->stroke_color; - } - if (ch->back_color != back_color) { - if (ctx->border_style == 4) - set_ass_color(&buf, 4, ch->back_color, back_color); - else - set_ass_color(&buf, 3, ch->back_color, back_color); - back_color = ch->back_color; - } - if (region->chars[j].type == ARIBCC_CHARTYPE_DRCS) - av_bprintf(&buf, "\xe3\x80\x93"); /* Geta Mark */ - else - ff_ass_bprint_text_event(&buf, ch->u8str, strlen(ch->u8str), "", 0); - } - - if (single_rect) { - if (i + 1 < ctx->caption.region_count) - av_bprintf(&buf, "{\\r}\\N"); - ff_dlog(ctx, "ASS subtitle%s (%d,%d) %dx%d [%d]\n", - (region->is_ruby) ? " (ruby)" : "", - region->x, region->y, region->width, region->height, - rect_idx); - } else { - if (!av_bprint_is_complete(&buf)) { - ret = AVERROR(ENOMEM); - goto fail; - } - ff_dlog(ctx, "ASS subtitle%s (%d,%d) %dx%d [%d]: %s\n", - (region->is_ruby) ? 
" (ruby)" : "", - region->x, region->y, region->width, region->height, - rect_idx, buf.str); - - ret = ff_ass_add_rect(sub, buf.str, ctx->readorder++, 0 , NULL, NULL); - if (ret != 0) - goto fail; - rect_idx++; - } - } - if (single_rect) { - if (!av_bprint_is_complete(&buf)) { - ret = AVERROR(ENOMEM); - goto fail; - } - ff_dlog(ctx, "ASS subtitle: %s\n", buf.str); - - ret = ff_ass_add_rect(sub, buf.str, ctx->readorder++, 0 , NULL, NULL); - if (ret != 0) - goto fail; - rect_idx++; - } - - av_bprint_finalize(&buf, NULL); - return rect_idx; - -fail: - if (sub->rects) { - for (int i = 0; i < ctx->caption.region_count; i++) { - if (sub->rects[i]) { - av_freep(&sub->rects[i]->ass); - av_freep(&sub->rects[i]); - } - } - av_freep(&sub->rects); - } - sub->num_rects = 0; - av_bprint_finalize(&buf, NULL); - - return ret; -} - -static int aribcaption_trans_text_subtitle(ARIBCaptionContext *ctx) -{ - AVSubtitle *sub = ctx->sub; - AVSubtitleRect *rect; - int ret = 0; - const char *text; - - sub->rects = av_calloc(ctx->caption.region_count, sizeof(*sub->rects)); - if (!sub->rects) { - ret = AVERROR(ENOMEM); - goto fail; - } - sub->num_rects = 1; - - sub->rects[0] = av_mallocz(sizeof(*sub->rects[0])); - if (!sub->rects[0]) { - ret = AVERROR(ENOMEM); - goto fail; - } - rect = sub->rects[0]; - - if (ctx->caption.region_count == 0) - text = ""; /* clear previous caption */ - else { - text = ctx->caption.text; - ff_dlog(ctx, "TEXT subtitle: %s\n", text); - } - rect->text = av_strdup(text); - if (!rect->text) { - ret = AVERROR(ENOMEM); - goto fail; - } - - sub->format = 1; /* text */ - rect->type = SUBTITLE_TEXT; - - return 1; - -fail: - if (sub->rects) { - rect = sub->rects[0]; - if (rect) { - av_freep(&rect->text); - av_freep(&rect); - } - av_freep(&sub->rects); - } - sub->num_rects = 0; - - return ret; -} - -static int aribcaption_decode(AVCodecContext *avctx, AVSubtitle *sub, - int *got_sub_ptr, const AVPacket *avpkt) -{ - ARIBCaptionContext *ctx = avctx->priv_data; - int status; - - ff_dlog(ctx, "ARIB caption packet pts=%"PRIx64":\n", avpkt->pts); - if (sub->num_rects) { - avpriv_request_sample(ctx, "Different Version of Segment asked Twice"); - return AVERROR_PATCHWELCOME; - } - hex_dump_debug(ctx, avpkt->data, avpkt->size); - - ctx->sub = sub; - ctx->avpkt = avpkt; - ctx->time_base = avctx->pkt_timebase; - if (ctx->time_base.num <= 0 || ctx->time_base.den <= 0) { - av_log(ctx, AV_LOG_VERBOSE, "No timebase set. assuming 90kHz.\n"); - ctx->time_base = av_make_q(1, 90000); - } - if (avpkt->pts == AV_NOPTS_VALUE) - ctx->pts = ARIBCC_PTS_NOPTS; - else - ctx->pts = av_rescale_q(avpkt->pts, ctx->time_base, (AVRational){1, 1000}); - - status = aribcc_decoder_decode(ctx->decoder, avpkt->data, avpkt->size, - ctx->pts, &ctx->caption); - if (status == ARIBCC_DECODE_STATUS_ERROR) { - av_log(ctx, AV_LOG_ERROR, - "aribcc_decoder_decode() returned with error.\n"); - return AVERROR(EAGAIN); - } - if (status == ARIBCC_DECODE_STATUS_NO_CAPTION) { - ff_dlog(ctx, "No caption.\n"); - return avpkt->size; - } else { - ff_dlog(ctx, "type=%02x, flags=%x, lang=%03x\n", - ctx->caption.type, ctx->caption.flags, ctx->caption.iso6392_language_code); - ff_dlog(ctx, "region count = %d, start=%d.%d, duration=%d.%d\n", - ctx->caption.region_count, - (int)(ctx->caption.pts / 1000), (int)(ctx->caption.pts % 1000), - (int)((ctx->caption.wait_duration == ARIBCC_DURATION_INDEFINITE) ? - -1 : ctx->caption.wait_duration / 1000), - (int)((ctx->caption.wait_duration == ARIBCC_DURATION_INDEFINITE) ? 
- 0 : ctx->caption.wait_duration % 1000)); - } - - switch ((enum AVSubtitleType) ctx->subtitle_type) { - case SUBTITLE_TEXT: - status = aribcaption_trans_text_subtitle(ctx); - break; - - case SUBTITLE_ASS: - status = aribcaption_trans_ass_subtitle(ctx); - break; - - case SUBTITLE_BITMAP: - status = aribcaption_trans_bitmap_subtitle(ctx); - break; - - case SUBTITLE_NONE: - default: - status = 0; - } - - if (status < 0) { - av_log(ctx, AV_LOG_ERROR, "Failed to set Subtitle: %s\n", - av_err2str(status)); - aribcc_caption_cleanup(&ctx->caption); - return status; - } - if (status > 0) { - *got_sub_ptr = 1; - if (ctx->avpkt->pts != AV_NOPTS_VALUE) - sub->pts = av_rescale_q(ctx->avpkt->pts, - ctx->time_base, AV_TIME_BASE_Q); - if (ctx->caption.wait_duration == ARIBCC_DURATION_INDEFINITE) - sub->end_display_time = UINT32_MAX; - else - sub->end_display_time = (uint32_t)ctx->caption.wait_duration; - } - - aribcc_caption_cleanup(&ctx->caption); - return avpkt->size; -} - -static void aribcaption_flush(AVCodecContext *avctx) -{ - ARIBCaptionContext *ctx = avctx->priv_data; - - if (ctx->decoder) - aribcc_decoder_flush(ctx->decoder); - if (ctx->renderer) - aribcc_renderer_flush(ctx->renderer); - if (!(avctx->flags2 & AV_CODEC_FLAG2_RO_FLUSH_NOOP)) - ctx->readorder = 0; -} - -static int aribcaption_close(AVCodecContext *avctx) -{ - ARIBCaptionContext *ctx = avctx->priv_data; - - av_freep(&ctx->clut); - if (ctx->renderer) - aribcc_renderer_free(ctx->renderer); - if (ctx->decoder) - aribcc_decoder_free(ctx->decoder); - if (ctx->context) - aribcc_context_free(ctx->context); - - return 0; -} - -static int aribcaption_init(AVCodecContext *avctx) -{ - ARIBCaptionContext *ctx = avctx->priv_data; - aribcc_profile_t profile; - int ret = 0; - - ctx->avctx = avctx; - - switch (avctx->profile) { - case FF_PROFILE_ARIB_PROFILE_A: - profile = ARIBCC_PROFILE_A; - /* assume 960x540 at initial state */ - ctx->plane_width = 960; - ctx->plane_height = 540; - ctx->font_size = 36; - break; - case FF_PROFILE_ARIB_PROFILE_C: - profile = ARIBCC_PROFILE_C; - ctx->plane_width = 320; - ctx->plane_height = 180; - ctx->font_size = 16; - break; - default: - av_log(avctx, AV_LOG_ERROR, "Unknown or unsupported profile set.\n"); - return AVERROR(EINVAL); - } - /* determine BorderStyle of ASS header */ - if (ctx->ignore_background) - ctx->border_style = 1; - else - ctx->border_style = 4; - ctx->charstyle = ARIBCC_CHARSTYLE_DEFAULT; - if (ctx->force_stroke_text || ctx->ignore_background) - ctx->charstyle |= ARIBCC_CHARSTYLE_STROKE; - - if (!(ctx->context = aribcc_context_alloc())) { - av_log(avctx, AV_LOG_ERROR, "Failed to alloc libaribcaption context.\n"); - return AVERROR_EXTERNAL; - } - aribcc_context_set_logcat_callback(ctx->context, logcat_callback, avctx); - if (!(ctx->decoder = aribcc_decoder_alloc(ctx->context))) { - av_log(avctx, AV_LOG_ERROR, "Failed to alloc libaribcaption decoder.\n"); - return AVERROR_EXTERNAL; - } - if (!aribcc_decoder_initialize(ctx->decoder, - (enum aribcc_encoding_scheme_t) ctx->encoding_scheme, - ARIBCC_CAPTIONTYPE_CAPTION, - profile, - ARIBCC_LANGUAGEID_FIRST)) { - av_log(avctx, AV_LOG_ERROR, "Failed to initialize libaribcaption decoder.\n"); - return AVERROR_EXTERNAL; - } - aribcc_decoder_set_replace_msz_fullwidth_ascii(ctx->decoder, - ctx->replace_fullwidth_ascii); - - /* Similar behavior as ffmpeg tool to set canvas size */ - if (ctx->canvas_width > 0 && ctx->canvas_height > 0 && - (ctx->avctx->width == 0 || ctx->avctx->height == 0)) { - ctx->avctx->width = ctx->canvas_width; - ctx->avctx->height = 
ctx->canvas_height; - } - - switch ((enum AVSubtitleType) ctx->subtitle_type) { - case SUBTITLE_ASS: - ret = set_ass_header(ctx); - if (ret != 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to set ASS header: %s\n", - av_err2str(ret)); - return ret; - } - break; - - case SUBTITLE_BITMAP: - if(!(ctx->renderer = aribcc_renderer_alloc(ctx->context))) { - av_log(avctx, AV_LOG_ERROR, "Failed to alloc libaribcaption renderer.\n"); - return AVERROR_EXTERNAL; - } - if(!aribcc_renderer_initialize(ctx->renderer, - ARIBCC_CAPTIONTYPE_CAPTION, - ARIBCC_FONTPROVIDER_TYPE_AUTO, - ARIBCC_TEXTRENDERER_TYPE_AUTO)) { - av_log(avctx, AV_LOG_ERROR, "Failed to initialize libaribcaption renderer.\n"); - return AVERROR_EXTERNAL; - } - estimate_video_frame_size(ctx); - ff_dlog(ctx, "canvas: %dx%d plane: %dx%d bitmap: %dx%d frame: %dx%d\n", - ctx->avctx->width, ctx->avctx->height, - ctx->plane_width, ctx->plane_height, - ctx->bitmap_plane_width, ctx->bitmap_plane_height, - ctx->frame_width, ctx->frame_height); - if (!aribcc_renderer_set_frame_size(ctx->renderer, - ctx->frame_width, ctx->frame_height)) { - av_log(ctx, AV_LOG_ERROR, - "aribcc_renderer_set_frame_size() returned with error.\n"); - return AVERROR_EXTERNAL; - } - - if (!(ctx->clut = av_mallocz(AVPALETTE_SIZE))) - return AVERROR(ENOMEM); - - aribcc_renderer_set_storage_policy(ctx->renderer, ARIBCC_CAPTION_STORAGE_POLICY_MINIMUM, 0); - aribcc_renderer_set_replace_drcs(ctx->renderer, ctx->replace_drcs); - aribcc_renderer_set_force_stroke_text(ctx->renderer, ctx->force_stroke_text); - aribcc_renderer_set_force_no_background(ctx->renderer, ctx->ignore_background); - aribcc_renderer_set_force_no_ruby(ctx->renderer, ctx->ignore_ruby); - aribcc_renderer_set_stroke_width(ctx->renderer, ctx->stroke_width); - if (ctx->font) { - int is_nomem = 0; - size_t count = 0; - const char **font_families = NULL; - const char *fonts = ctx->font; - - while (*fonts) { - const char **ff = av_realloc_array(font_families, count + 1, sizeof(*font_families)); - if (!ff) { - is_nomem = 1; - break; - } else { - font_families = ff; - ff[count++] = av_get_token(&fonts, ","); - if (!ff[count - 1]) { - is_nomem = 1; - break; - } else if (*fonts) - fonts++; - } - } - if (!is_nomem && count) - aribcc_renderer_set_default_font_family(ctx->renderer, font_families, count, true); - while (count) - av_freep(&font_families[--count]); - av_freep(&font_families); - if (is_nomem) - return AVERROR(ENOMEM); - } - break; - - case SUBTITLE_TEXT: - case SUBTITLE_NONE: - default: - /* do nothing */ ; - } - - ctx->readorder = 0; - - return 0; -} - -#if !defined(ASS_SINGLE_RECT) -# define ASS_SINGLE_RECT 0 -#endif - -#define OFFSET(x) offsetof(ARIBCaptionContext, x) -#define SD AV_OPT_FLAG_SUBTITLE_PARAM | AV_OPT_FLAG_DECODING_PARAM -static const AVOption options[] = { - { "sub_type", "subtitle rendering type", - OFFSET(subtitle_type), AV_OPT_TYPE_INT, - { .i64 = SUBTITLE_ASS }, SUBTITLE_NONE, SUBTITLE_ASS, SD, "type" }, - { "none", "do nothing", 0, AV_OPT_TYPE_CONST, - { .i64 = SUBTITLE_NONE }, .flags = SD, .unit = "type" }, - { "bitmap", "bitmap rendering", 0, AV_OPT_TYPE_CONST, - { .i64 = SUBTITLE_BITMAP }, .flags = SD, .unit = "type" }, - { "text", "plain text", 0, AV_OPT_TYPE_CONST, - { .i64 = SUBTITLE_TEXT }, .flags = SD, .unit = "type" }, - { "ass", "formatted text", 0, AV_OPT_TYPE_CONST, - { .i64 = SUBTITLE_ASS }, .flags = SD, .unit = "type" }, - { "caption_encoding", "encoding scheme of subtitle text", - OFFSET(encoding_scheme), AV_OPT_TYPE_INT, { .i64 = ARIBCC_ENCODING_SCHEME_AUTO }, - 
ARIBCC_ENCODING_SCHEME_AUTO, ARIBCC_ENCODING_SCHEME_ABNT_NBR_15606_1_LATIN, SD, "encoding" }, - { "auto", "automatically detect encoding scheme", 0, AV_OPT_TYPE_CONST, - { .i64 = ARIBCC_ENCODING_SCHEME_AUTO }, .flags = SD, .unit = "encoding" }, - { "jis", "8bit-char JIS encoding (Japanese ISDB captions)", 0, AV_OPT_TYPE_CONST, - { .i64 = ARIBCC_ENCODING_SCHEME_ARIB_STD_B24_JIS }, .flags = SD, .unit = "encoding" }, - { "utf8", "UTF-8 encoding (Philippines ISDB-T captions)", 0, AV_OPT_TYPE_CONST, - { .i64 = ARIBCC_ENCODING_SCHEME_ARIB_STD_B24_UTF8 }, .flags = SD, .unit = "encoding" }, - { "latin", "latin characters (SBTVD / ISDB-Tb captions used in South America)", 0, AV_OPT_TYPE_CONST, - { .i64 = ARIBCC_ENCODING_SCHEME_ABNT_NBR_15606_1_LATIN }, .flags = SD, .unit = "encoding" }, - { "ass_single_rect", "workaround of ASS subtitle for players which can't handle multi-rectangle [ass]", - OFFSET(ass_single_rect), AV_OPT_TYPE_BOOL, { .i64 = ASS_SINGLE_RECT }, 0, 1, SD }, - { "font", "comma-separated font family [ass, bitmap]", - OFFSET(font), AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, SD }, - { "replace_fullwidth_ascii", "replace MSZ fullwidth alphanumerics with halfwidth alphanumerics [ass, bitmap]", - OFFSET(replace_fullwidth_ascii), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, SD }, - { "force_outline_text", "always render characters with outline [(ass), bitmap]", - OFFSET(force_stroke_text), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, SD }, - { "ignore_background", "ignore rendering caption background [(ass), bitmap]", - OFFSET(ignore_background), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, SD }, - { "ignore_ruby", "ignore ruby-like characters [ass, bitmap]", - OFFSET(ignore_ruby), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, SD }, - { "outline_width", "outline width of text [(ass), bitmap]", - OFFSET(stroke_width), AV_OPT_TYPE_FLOAT, { .dbl = 1.5 }, 0.0, 3.0, SD }, - { "replace_drcs", "replace known DRCS [bitmap]", - OFFSET(replace_drcs), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, SD }, - {"canvas_size", "set input video size (WxH or abbreviation) [bitmap]", - OFFSET(canvas_width), AV_OPT_TYPE_IMAGE_SIZE, { .str = NULL }, 0, INT_MAX, SD }, - { NULL } -}; - -static const AVClass aribcaption_class = { - .class_name = "aribcaption decoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_libaribcaption_decoder = { - .p.name = "libaribcaption", - .p.long_name = NULL_IF_CONFIG_SMALL("ARIB STD-B24 caption decoder"), - .p.type = AVMEDIA_TYPE_SUBTITLE, - .p.id = AV_CODEC_ID_ARIB_CAPTION, - .priv_data_size = sizeof(ARIBCaptionContext), - .init = aribcaption_init, - .close = aribcaption_close, - FF_CODEC_DECODE_SUB_CB(aribcaption_decode), - .flush = aribcaption_flush, - .p.priv_class = &aribcaption_class, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git a/spaces/colornative/goofyai-3d_render_style_xl/README.md b/spaces/colornative/goofyai-3d_render_style_xl/README.md deleted file mode 100644 index e30a88c2650953973bd8aaf623fa3e7bbd462b31..0000000000000000000000000000000000000000 --- a/spaces/colornative/goofyai-3d_render_style_xl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Goofyai-3d Render Style Xl -emoji: 🦀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/congsaPfin/Manga-OCR/logs/Become a Hoard Master with this Amazing APK for Android Devices.md 
b/spaces/congsaPfin/Manga-OCR/logs/Become a Hoard Master with this Amazing APK for Android Devices.md deleted file mode 100644 index c3d6c9a0c3b17080d622de05e448f9aa8eca1641..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Become a Hoard Master with this Amazing APK for Android Devices.md +++ /dev/null @@ -1,98 +0,0 @@ - -

        -<h1>Hoard Master APK: A Fun and Satisfying Hole Idle Game</h1>
        

        -<p>Do you love io games that let you devour everything in sight? Do you enjoy idle games that reward you with money and upgrades? If you answered yes to both questions, then you will love Hoard Master APK, a game that combines the best of both worlds. In this game, you are a black hole with no limits. You can swallow different items from tiny to massive and push them out to turn them into money. You can use the money to upgrade your hole, get bigger, and shred what you ate. You can consume everything in the city because you can. Hoard Master APK is one of the most fun and satisfying games that you can play on your Android device. In this article, we will tell you more about this game, its features, how to download and install it, and some tips and tricks for playing it.</p>
        

        -<h2>What is Hoard Master APK?</h2>
        

        -<p>Hoard Master APK is an arcade hole idle game developed by Rollic Games, a popular game studio that has created many hit games such as Go Knots 3D, Tangle Master 3D, Picker 3D, and more. Hoard Master APK is one of their latest games and has gained over 5 million downloads on the Google Play Store. The game is rated 4.1 out of 5 stars by more than 30,000 users who have enjoyed playing it.</p>
        

        -<p>hoard master apk</p>
        -<p>DOWNLOAD https://urlca.com/2uOct3</p>
        
        -<p>In Hoard Master APK, you are a black hole that can eat anything and everything in the city. You can swallow cars, buildings, trees, animals, people, and more. The more you eat, the bigger you get. The bigger you get, the more money you make. The more money you make, the more upgrades you can buy. You can upgrade your size, speed, capacity, and power to become the ultimate hoard master.</p>
        

        -<h2>Features of Hoard Master APK</h2>
        

        -<p>Hoard Master APK is a game that will change your hole life. It has many features that make it fun and addictive to play. Here are some of them:</p>
        

        -<h3>Swallow everything in the city</h3>
        

        -<p>The main feature of Hoard Master APK is that you can swallow anything and everything in the city. You can start with small items like coins, trash cans, mailboxes, and bicycles. Then you can move on to bigger items like cars, trucks, buses, and trains. Finally, you can eat huge items like buildings, bridges, monuments, and even mountains. There is no limit to what you can eat in this game.</p>
        

        -<h3>Upgrade your hole and make money</h3>
        

        -<p>Another feature of Hoard Master APK is that you can upgrade your hole and make money from what you eat. Every time you swallow something, you will push it out as money. You can use the money to buy various upgrades for your hole. You can increase your size to eat bigger items faster. You can increase your speed to move around the city quicker. You can increase your capacity to hold more items in your hole. And you can increase your power to shred what you ate into smaller pieces.</p>
        

        -<h3>Enjoy satisfying graphics and sound effects</h3>
        

        -<p>A third feature of Hoard Master APK is that it has satisfying graphics and sound effects that enhance your gaming experience. The game has colorful and cartoonish graphics that make the city look lively and vibrant. The game also has realistic and funny sound effects that make the eating process more enjoyable. You can hear the crunching, munching, popping, and exploding sounds as you devour everything in sight.</p>
        

        -hoard master apk download
        
                                                                                                                                              -hoard master apk mod
                                                                                                                                              -hoard master apk latest version
                                                                                                                                              -hoard master apk free
                                                                                                                                              -hoard master apk android
                                                                                                                                              -hoard master apk game
                                                                                                                                              -hoard master apk offline
                                                                                                                                              -hoard master apk update
                                                                                                                                              -hoard master apk hack
                                                                                                                                              -hoard master apk unlimited money
                                                                                                                                              -hoard master apk for pc
                                                                                                                                              -hoard master apk online
                                                                                                                                              -hoard master apk no ads
                                                                                                                                              -hoard master apk full version
                                                                                                                                              -hoard master apk old version
                                                                                                                                              -hoard master apk 2023
                                                                                                                                              -hoard master apk xapk
                                                                                                                                              -hoard master apk review
                                                                                                                                              -hoard master apk cheats
                                                                                                                                              -hoard master apk gameplay
                                                                                                                                              -hoard master apk tips and tricks
                                                                                                                                              -hoard master apk size
                                                                                                                                              -hoard master apk requirements
                                                                                                                                              -hoard master apk features
                                                                                                                                              -hoard master apk guide
                                                                                                                                              -hoard master apk best settings
                                                                                                                                              -hoard master apk how to play
                                                                                                                                              -hoard master apk walkthrough
                                                                                                                                              -hoard master apk levels
                                                                                                                                              -hoard master apk challenges
                                                                                                                                              -hoard master apk rewards
                                                                                                                                              -hoard master apk skins
                                                                                                                                              -hoard master apk characters
                                                                                                                                              -hoard master apk items
                                                                                                                                              -hoard master apk graphics
                                                                                                                                              -hoard master apk sound effects
                                                                                                                                              -hoard master apk music
                                                                                                                                              -hoard master apk fun factor
                                                                                                                                              -hoard master apk rating
                                                                                                                                              -hoard master apk similar games
                                                                                                                                              -arcade hole:hoard master apk
                                                                                                                                              -hole.io:hoard master apk
                                                                                                                                              -black hole:hoard master apk
                                                                                                                                              -hole simulator:hoard master apk
                                                                                                                                              -hole.io 2:hoard master apk
                                                                                                                                              -hole.io 3d:hoard master apk
                                                                                                                                              -hole.io online:hoard master apk
                                                                                                                                              -hole.io offline:hoard master apk
                                                                                                                                              -hole.io multiplayer:hoard master apk
                                                                                                                                              -hole.io vs city:hoard master apk

                                                                                                                                              -

        -<h2>How to download and install Hoard Master APK?</h2>
        

        -<p>If you want to download and install Hoard Master APK on your Android device, you will need to follow these simple steps:</p>
        

                                                                                                                                              Download the APK file from a trusted source

                                                                                                                                              -

The first step is to download the Hoard Master APK file from a trusted source. You can use the link below to get the latest version of the game. The file size is about 80 MB, so make sure you have enough space on your device.

                                                                                                                                              -

                                                                                                                                              Download Hoard Master APK

                                                                                                                                              -

                                                                                                                                              Enable unknown sources on your device

                                                                                                                                              -

                                                                                                                                              The second step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and toggle it on. You may see a warning message, but don't worry, it is safe to proceed.

                                                                                                                                              -

                                                                                                                                              Install the APK file and launch the game

                                                                                                                                              -

The third and final step is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device and tap on it. You may see a pop-up asking for permissions; just tap on Install and wait for the process to finish. Once the installation is done, you can open the game and start playing.
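If you prefer to sideload the APK from a computer instead of tapping through the installer on the phone, a small script can drive the install over USB. The following is only a hedged sketch, not an official installer: it assumes you have Python and the Android platform tools (adb) on your computer, USB debugging enabled on the device, and it uses a hypothetical file name for wherever you saved the download.

```python
# Minimal sketch: sideload an APK onto a connected Android device or emulator
# with adb. Assumes the Android platform tools are installed and USB debugging
# is enabled; the file name below is a hypothetical placeholder.
import subprocess

APK_PATH = "hoard-master.apk"  # hypothetical local file name

# "adb install -r" installs (or reinstalls) the package on the connected device.
result = subprocess.run(
    ["adb", "install", "-r", APK_PATH],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

The same sketch works for an Android emulator on a PC, since adb treats a running emulator as a connected device.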

                                                                                                                                              -

                                                                                                                                              Tips and tricks for playing Hoard Master APK

                                                                                                                                              -

                                                                                                                                              Hoard Master APK is a game that is easy to play but hard to master. If you want to become the best hoard master in the city, you will need some tips and tricks to help you out. Here are some of them:

                                                                                                                                              -

                                                                                                                                              Focus on the big items first

                                                                                                                                              -

                                                                                                                                              One tip for playing Hoard Master APK is to focus on the big items first. The big items will give you more money and fill up your capacity faster. They will also make your hole bigger, which will allow you to eat more items later. Try to look for buildings, monuments, and mountains that you can swallow in one go.

                                                                                                                                              -

                                                                                                                                              Avoid obstacles and enemies

                                                                                                                                              -

                                                                                                                                              Another tip for playing Hoard Master APK is to avoid obstacles and enemies that can harm you or slow you down. Some of these include bombs, spikes, lasers, police cars, helicopters, and other holes. If you touch them, you will lose some of your items or money. You will also lose some of your time or health. Try to steer clear of them or eat them before they get close to you.

                                                                                                                                              -

                                                                                                                                              Use boosters and power-ups wisely

                                                                                                                                              -

                                                                                                                                              A third tip for playing Hoard Master APK is to use boosters and power-ups wisely. These are special items that can give you an edge in the game. Some of these include magnets, rockets, shields, multipliers, and more. They can help you attract more items, move faster, protect yourself, or increase your earnings. However, they are limited in number and duration, so use them only when necessary or when they can make a big difference.

                                                                                                                                              -

                                                                                                                                              Conclusion

                                                                                                                                              -

                                                                                                                                              Hoard Master APK is a fun and satisfying hole idle game that will keep you entertained for hours. You can swallow everything in the city, upgrade your hole and make money, and enjoy satisfying graphics and sound effects. You can also download and install Hoard Master APK easily on your Android device by following our guide above. And you can improve your skills and performance by using our tips and tricks above. If you are looking for a game that will make you feel powerful and rich, then Hoard Master APK is the game for you.

                                                                                                                                              -

                                                                                                                                              FAQs

                                                                                                                                              -

                                                                                                                                              Here are some frequently asked questions about Hoard Master APK:

                                                                                                                                              -

                                                                                                                                              Q: Is Hoard Master APK free to play?

                                                                                                                                              -

A: Yes, Hoard Master APK is free to play. You can download and install it without paying anything. However, the game may show ads, and it offers optional in-app purchases that can enhance your gaming experience.

                                                                                                                                              -

                                                                                                                                              Q: Is Hoard Master APK safe to play?

                                                                                                                                              -

                                                                                                                                              A: Yes, Hoard Master APK is safe to play. The game does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source like the link we provided above.

                                                                                                                                              -

                                                                                                                                              Q: Is Hoard Master APK offline or online?

                                                                                                                                              -

                                                                                                                                              A: Hoard Master APK is an offline game that does not require an internet connection to play. You can play it anytime and anywhere without worrying about data usage or connectivity issues.

                                                                                                                                              -

                                                                                                                                              Q: How many levels are there in Hoard Master APK?

                                                                                                                                              -

                                                                                                                                              A: Hoard Master APK has many levels that you can play and enjoy. The game is constantly updated with new levels and challenges that will test your skills and creativity. You can also replay the levels that you have completed to improve your score and rank.

                                                                                                                                              -

                                                                                                                                              Q: Can I play Hoard Master APK on PC?

                                                                                                                                              -

A: Hoard Master APK is designed for Android devices, but you can also play it on PC using an Android emulator. An Android emulator is software that allows you to run Android apps on your computer. Some popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. You can download and install any of them on your PC and then use them to run Hoard Master APK.

                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ti game ONE PIECE Bounty Rush APK cho Android - Tri nghim th gii cp bin hp dn.md b/spaces/congsaPfin/Manga-OCR/logs/Ti game ONE PIECE Bounty Rush APK cho Android - Tri nghim th gii cp bin hp dn.md deleted file mode 100644 index a28545d17a848df7f89e49fe73e88360a815088d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ti game ONE PIECE Bounty Rush APK cho Android - Tri nghim th gii cp bin hp dn.md +++ /dev/null @@ -1,126 +0,0 @@ - -

                                                                                                                                              One Piece Bounty Rush: A Guide for Beginners

                                                                                                                                              -

                                                                                                                                              Are you a fan of One Piece, the epic manga and anime series about pirates, adventure, and friendship? Do you want to experience the thrill of fighting alongside Luffy, Zoro, Nami, and other iconic characters in real-time battles? If you answered yes, then you should definitely check out One Piece Bounty Rush, a 3D anime battle arena treasure looting game developed and published by Bandai Namco Entertainment.

                                                                                                                                              -

                                                                                                                                              One Piece Bounty Rush is a free-to-play mobile game that lets you join the quest to become the Pirate King in a world full of treasure, danger, and excitement. You can choose from over 70 characters from the One Piece universe, each with their own unique abilities and skills. You can also team up with other players from around the world in 4 vs 4 multiplayer matches, where you have to capture and hold as many treasure points as possible before time runs out.

                                                                                                                                              -

download one piece bounty rush apk


                                                                                                                                              Download ---> https://urlca.com/2uO5pB



                                                                                                                                              -

                                                                                                                                              If you are interested in playing this game, but don't know where to start, don't worry. In this article, we will give you a comprehensive guide on how to download and install the game on your Android device, how to play the game effectively, how to enhance your characters, and how to enjoy the game to the fullest. So without further ado, let's get started!

                                                                                                                                              -

                                                                                                                                              How to download and install One Piece Bounty Rush on your Android device

                                                                                                                                              -

                                                                                                                                              One Piece Bounty Rush is available for both Android and iOS devices. However, in this guide, we will focus on how to get it on your Android device. Here are the steps you need to follow:

                                                                                                                                              -
                                                                                                                                                -
1. Go to Google Play Store on your device and search for One Piece Bounty Rush.
2. Tap on the game icon and then tap on Install. The game will start downloading and installing automatically.
3. Once the installation is complete, tap on Open to launch the game.
4. You will be asked to accept the terms of service and privacy policy of Bandai Namco Entertainment. Tap on Agree to proceed.
5. You will also be asked to allow some permissions for the game, such as access to your device's storage and location. Tap on Allow to grant them.
6. The game will then download some additional data. This may take some time depending on your internet connection speed.
7. After the data download is finished, you will be taken to the main menu of the game. Congratulations! You have successfully installed One Piece Bounty Rush on your Android device.
                                                                                                                                              -

                                                                                                                                              How to play One Piece Bounty Rush: The basics of the gameplay, the character classes, the game modes, and the tips and tricks

                                                                                                                                              -

                                                                                                                                              Now that you have installed One Piece Bounty Rush on your device, it's time to learn how to play it. The game is not very complicated, but it does require some strategy and teamwork. Here are some basic aspects of the gameplay that you need to know:

                                                                                                                                              -

                                                                                                                                              The character classes

                                                                                                                                              -

                                                                                                                                              One Piece Bounty Rush has three character classes: Attacker, Defender, and Runner. Each class has its own strengths and weaknesses, as well as different roles in a team. Here is a brief overview of each class:

                                                                                                                                              -
                                                                                                                                                -
• Attacker: This class specializes in dealing damage to enemies and capturing treasure points quickly. They have high attack power and speed, but low defense and health. They are good at fighting one-on-one or taking out weak enemies. However, they are vulnerable to being outnumbered or overwhelmed by stronger enemies. Examples of Attacker characters are Luffy, Zoro, Crocodile, and Mihawk.
• Defender: This class specializes in holding and protecting treasure points that your team has captured. They have high defense and health, but lower attack power and speed. They are good at guarding a point against several enemies at once, but they struggle to chase down faster opponents.
• Runner: This class specializes in moving around the map quickly and grabbing treasure points before the enemy can react. They have high speed and mobility, but low defense and health. They are good at scouting, capturing undefended points, and escaping danger, but they can be defeated easily in a direct fight.

                                                                                                                                                The game modes

                                                                                                                                                -

                                                                                                                                                One Piece Bounty Rush has two main game modes: League Battle and Challenge Battle. Here is a brief description of each mode:

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • League Battle: This is the default and most popular mode of the game. In this mode, you can join a random team or create your own team with your friends to compete against other players in real-time. You can choose from four different leagues: East Blue, Grand Line, New World, and SS. Each league has its own ranking system and rewards. You can earn league points by winning matches and ranking up in your league. You can also earn bounty coins, character fragments, and medals by opening treasure chests after each match.
                                                                                                                                                • -
                                                                                                                                                • Challenge Battle: This is a special mode that is available for a limited time. In this mode, you can participate in various events that have different rules and objectives. For example, you may have to use only certain characters, or face stronger enemies, or collect more treasure points than usual. You can earn event points by completing matches and missions in this mode. You can also earn exclusive rewards, such as special characters, costumes, and titles by exchanging event points in the event shop.
                                                                                                                                                • -
                                                                                                                                                -

                                                                                                                                                The tips and tricks

                                                                                                                                                -

                                                                                                                                                Now that you know the basics of the gameplay, you may want to learn some tips and tricks to improve your performance and win more matches. Here are some of them:

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • Choose your characters wisely: Before entering a match, you should check the map and the enemy team's composition. You should choose characters that suit the map's terrain and treasure locations, as well as counter the enemy's class and element. For example, if the map has many high places, you may want to use a Runner character that can jump or fly over them. If the enemy team has many Defenders, you may want to use an Attacker character that can break their guard or deal more damage to them.
                                                                                                                                                • -
                                                                                                                                                • Use your skills strategically: Each character has two skills that can be activated by tapping the icons on the lower-left corner of the screen. Skills have different effects and cooldown times, so you should use them wisely. For example, you may want to save your skills for when you need to capture a treasure, escape from an enemy, or finish off a low-health opponent. You may also want to coordinate your skills with your teammates to create combos or support each other.
                                                                                                                                                • -
                                                                                                                                                • Dodge and guard: Besides attacking, you can also dodge and guard to avoid or reduce damage from enemies. You can dodge by swiping the screen in any direction. Dodging consumes stamina, which is shown by the yellow bar below your health bar. You can guard by holding the attack button. Guarding reduces damage from normal attacks, but not from skills or critical hits. You can also break an enemy's guard by using a skill or a charged attack (by holding the attack button longer).
                                                                                                                                                • -
                                                                                                                                                • Collect treasure orbs: Treasure orbs are small blue spheres that appear randomly on the map. They are very useful for boosting your team's treasure gauge, which is shown by the blue bar on the top of the screen. The treasure gauge determines how fast you can capture a treasure point. The higher the treasure gauge, the faster you can capture it. You can collect treasure orbs by simply touching them or using a skill that attracts them.
                                                                                                                                                • -
• Use medals and support: Medals and support are two ways to enhance your characters' stats and abilities. Medals are items that you can equip to your characters to give them various traits, such as increased attack power, defense power, speed, critical rate, etc. You can obtain medals by opening treasure chests or exchanging bounty coins in the shop. You can also upgrade or evolve medals to make them stronger. Support characters are other characters that you can assign to your main characters to give them additional bonuses, such as increased health, skill cooldown reduction, element advantage, etc. You can obtain support characters by collecting character fragments or exchanging bounty coins in the shop.
                                                                                                                                                • -
                                                                                                                                                -

                                                                                                                                                How to enhance your characters: The importance of character fragments, medals, skills, and support

                                                                                                                                                -

                                                                                                                                                As you play One Piece Bounty Rush, you may want to make your characters stronger and unlock their full potential. There are several ways to do that, such as collecting character fragments, equipping medals, upgrading skills, and assigning support. Here are some details on how to enhance your characters:

                                                                                                                                                -

                                                                                                                                                The character fragments

                                                                                                                                                -

                                                                                                                                                Character fragments are items that you need to increase your character's grade level and star level. Grade level determines how high your character's stats can go, while star level determines how many traits your character can have from medals.

                                                                                                                                                -

                                                                                                                                                -

                                                                                                                                                You can obtain character fragments by opening treasure chests after each match, or exchanging bounty coins in the shop. You can also get character fragments by participating in events or completing missions. You need a certain number of character fragments to increase your character's grade level or star level. For example, you need 10 character fragments to increase your character's grade level from A to S, or 50 character fragments to increase your character's star level from 4 to 5.
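To make the fragment math above concrete, here is a tiny, hedged Python sketch. It encodes only the two thresholds quoted in this section (10 fragments for the grade A-to-S upgrade and 50 fragments for the 4-to-5 star upgrade); the helper name and the example fragment count are illustrative, not taken from the game.

```python
# Illustrative sketch of the two fragment thresholds quoted above.
# Only the 10 (grade A -> S) and 50 (star 4 -> 5) values come from the text;
# everything else here is a made-up example.
UPGRADE_COSTS = {
    ("grade", "A->S"): 10,  # fragments quoted in the article
    ("star", "4->5"): 50,   # fragments quoted in the article
}

def fragments_still_needed(owned: int, cost: int) -> int:
    """Return how many more fragments are needed, never negative."""
    return max(cost - owned, 0)

# Example: with 32 fragments saved up, 18 more are needed for the 4 -> 5 star upgrade.
print(fragments_still_needed(32, UPGRADE_COSTS[("star", "4->5")]))  # prints 18
```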

                                                                                                                                                -

                                                                                                                                                The medals

                                                                                                                                                -

                                                                                                                                                Medals are items that you can equip to your characters to give them various traits, such as increased attack power, defense power, speed, critical rate, etc. Medals have different rarities, from 1 star to 5 stars. The higher the rarity, the more traits and slots a medal has. You can equip up to three medals to each character, and each medal can have up to three traits.

                                                                                                                                                -

                                                                                                                                                You can obtain medals by opening treasure chests or exchanging bounty coins in the shop. You can also upgrade or evolve medals to make them stronger. To upgrade a medal, you need to use other medals as materials. To evolve a medal, you need to use evolution materials that you can get from the medal exchange shop or events. Upgrading or evolving a medal will increase its stats and unlock new traits or slots.

                                                                                                                                                -

                                                                                                                                                The skills

                                                                                                                                                -

                                                                                                                                                Skills are special abilities that your characters can use in battle. Each character has two skills that can be activated by tapping the icons on the lower-left corner of the screen. Skills have different effects and cooldown times, so you should use them wisely.

                                                                                                                                                -

                                                                                                                                                You can upgrade your skills by using skill orbs that you can get from treasure chests, events, or missions. Upgrading your skills will increase their damage, duration, range, or other effects. You can also unlock new skills for some characters by increasing their star level.

                                                                                                                                                -

                                                                                                                                                The support

                                                                                                                                                -

Support characters are other characters that you can assign to your main characters to give them additional bonuses, such as increased health, skill cooldown reduction, element advantage, etc. You can assign up to three support characters to each main character, and each support character can give up to three bonuses.

                                                                                                                                                -

You can obtain support characters by collecting character fragments or exchanging bounty coins in the shop. You can also enhance them by using support orbs that you can get from treasure chests, events, or missions. Enhancing your support characters will increase their bonus percentages and unlock new bonuses.

                                                                                                                                                -

                                                                                                                                                How to enjoy One Piece Bounty Rush: The features of the game that make it fun and engaging, such as the graphics, the sound, the story, and the events

                                                                                                                                                -

                                                                                                                                                One Piece Bounty Rush is not only a game that tests your skills and strategy, but also a game that lets you immerse yourself in the world of One Piece. The game has many features that make it fun and engaging for fans of the manga and anime series. Here are some of them:

                                                                                                                                                -

                                                                                                                                                The graphics

                                                                                                                                                -

                                                                                                                                                One Piece Bounty Rush has stunning 3D graphics that bring the characters and environments of One Piece to life. You can see the details of each character's appearance and expression, as well as the effects of their skills and attacks. You can also explore different maps that are based on locations from the One Piece story, such as Alabasta, Dressrosa, Whole Cake Island, and more. The game also has dynamic weather and lighting effects that change according to the time of day and the season.

                                                                                                                                                -

                                                                                                                                                The sound

                                                                                                                                                -

                                                                                                                                                One Piece Bounty Rush has amazing sound effects and music that enhance the atmosphere and excitement of the game. You can hear the voices of each character as they speak their signature lines or taunt their enemies. You can also hear the sounds of their skills and attacks as they clash with each other. The game also has original music tracks that match the mood and theme of each map and mode.

                                                                                                                                                -

                                                                                                                                                The story

                                                                                                                                                -

                                                                                                                                                One Piece Bounty Rush has a story mode that follows the adventures of Luffy and his crew as they travel across the Grand Line in search of the One Piece treasure. You can relive some of the most memorable scenes and battles from the manga and anime series, as well as interact with various characters that you meet along the way. The story mode also has original scenarios that are exclusive to the game.

                                                                                                                                                -

                                                                                                                                                The events

                                                                                                                                                -

One Piece Bounty Rush has regular events that offer new challenges and rewards for players. Some events are based on special occasions or seasons, such as Halloween, Christmas, New Year, etc. Some events are based on specific arcs or characters from the One Piece story, such as Marineford War, Wano Country, Big Mom Pirates, etc. Some events are based on collaborations with other games or media franchises, such as Dragon Ball Z, Naruto Shippuden, My Hero Academia, etc. Events usually have their own game modes, missions, rewards, and rankings. You can earn event points by completing matches and missions in the event mode. You can also earn exclusive rewards, such as special characters, costumes, and titles, by exchanging event points in the event shop.

                                                                                                                                                -

Conclusion

                                                                                                                                                -

                                                                                                                                                One Piece Bounty Rush is a game that every One Piece fan should try. It is a game that lets you join the adventure of Luffy and his crew in a 3D anime battle arena treasure looting game. You can choose from over 70 characters from the One Piece universe, each with their own unique abilities and skills. You can also team up with other players from around the world in 4 vs 4 multiplayer matches, where you have to capture and hold as many treasure points as possible before time runs out.

                                                                                                                                                -

                                                                                                                                                In this article, we have given you a comprehensive guide on how to download and install the game on your Android device, how to play the game effectively, how to enhance your characters, and how to enjoy the game to the fullest. We hope that this guide has helped you to get started with One Piece Bounty Rush and to have fun with it.

                                                                                                                                                -

                                                                                                                                                If you are ready to join the quest to become the Pirate King, then download One Piece Bounty Rush today and start your adventure. You can also visit the official website of the game for more information and updates. And don't forget to share your feedback and opinions with us in the comments section below. We would love to hear from you!

                                                                                                                                                -

                                                                                                                                                FAQs: Some common questions and answers about One Piece Bounty Rush

                                                                                                                                                -

                                                                                                                                                Here are some frequently asked questions and answers about One Piece Bounty Rush that you may find useful:

                                                                                                                                                -
                                                                                                                                                  -
1. Q: How can I get more bounty coins?
   A: Bounty coins are the main currency of One Piece Bounty Rush. You can use them to buy character fragments, medals, support, and other items in the shop. You can get more bounty coins by opening treasure chests after each match, completing daily missions, ranking up in your league, participating in events, or watching ads.
2. Q: How can I get more gems?
   A: Gems are the premium currency of One Piece Bounty Rush. You can use them to buy special items or characters in the shop, or to summon new characters in the scout menu. You can get more gems by completing achievements, logging in daily, participating in events, or buying them with real money.
3. Q: How can I get more characters?
   A: There are two ways to get more characters in One Piece Bounty Rush: by collecting character fragments or by summoning them in the scout menu. You can collect character fragments by opening treasure chests, exchanging bounty coins, or participating in events. You need a certain number of character fragments to unlock a new character or increase their star level. You can summon new characters by using gems or scout tickets in the scout menu. You can get scout tickets by completing missions or participating in events.
4. Q: How can I change my character's costume?
   A: Some characters have alternative costumes that you can use to change their appearance. You can get costumes by exchanging event points or gems in the shop or event shop. To change your character's costume, go to the character menu and tap on the costume icon in the lower-right corner of the screen. Then select the costume that you want to use and tap on Confirm.
5. Q: How can I contact customer support?
   A: If you have any problems or issues with One Piece Bounty Rush, you can contact customer support by going to the settings menu and tapping on Support. Then tap on Contact Us and fill out the form with your details and inquiry. You can also check out the FAQ section for more information and solutions.

    
                                                                                                                                                \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/loss/asymmetric_loss.py b/spaces/cooelf/Multimodal-CoT/timm/loss/asymmetric_loss.py deleted file mode 100644 index a8b10f9c797c2cb3b2652302717b592dada216f3..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/loss/asymmetric_loss.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -import torch.nn as nn - - -class AsymmetricLossMultiLabel(nn.Module): - def __init__(self, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-8, disable_torch_grad_focal_loss=False): - super(AsymmetricLossMultiLabel, self).__init__() - - self.gamma_neg = gamma_neg - self.gamma_pos = gamma_pos - self.clip = clip - self.disable_torch_grad_focal_loss = disable_torch_grad_focal_loss - self.eps = eps - - def forward(self, x, y): - """" - Parameters - ---------- - x: input logits - y: targets (multi-label binarized vector) - """ - - # Calculating Probabilities - x_sigmoid = torch.sigmoid(x) - xs_pos = x_sigmoid - xs_neg = 1 - x_sigmoid - - # Asymmetric Clipping - if self.clip is not None and self.clip > 0: - xs_neg = (xs_neg + self.clip).clamp(max=1) - - # Basic CE calculation - los_pos = y * torch.log(xs_pos.clamp(min=self.eps)) - los_neg = (1 - y) * torch.log(xs_neg.clamp(min=self.eps)) - loss = los_pos + los_neg - - # Asymmetric Focusing - if self.gamma_neg > 0 or self.gamma_pos > 0: - if self.disable_torch_grad_focal_loss: - torch._C.set_grad_enabled(False) - pt0 = xs_pos * y - pt1 = xs_neg * (1 - y) # pt = p if t > 0 else 1-p - pt = pt0 + pt1 - one_sided_gamma = self.gamma_pos * y + self.gamma_neg * (1 - y) - one_sided_w = torch.pow(1 - pt, one_sided_gamma) - if self.disable_torch_grad_focal_loss: - torch._C.set_grad_enabled(True) - loss *= one_sided_w - - return -loss.sum() - - -class AsymmetricLossSingleLabel(nn.Module): - def __init__(self, gamma_pos=1, gamma_neg=4, eps: float = 0.1, reduction='mean'): - super(AsymmetricLossSingleLabel, self).__init__() - - self.eps = eps - self.logsoftmax = nn.LogSoftmax(dim=-1) - self.targets_classes = [] # prevent gpu repeated memory allocation - self.gamma_pos = gamma_pos - self.gamma_neg = gamma_neg - self.reduction = reduction - - def forward(self, inputs, target, reduction=None): - """" - Parameters - ---------- - x: input logits - y: targets (1-hot vector) - """ - - num_classes = inputs.size()[-1] - log_preds = self.logsoftmax(inputs) - self.targets_classes = torch.zeros_like(inputs).scatter_(1, target.long().unsqueeze(1), 1) - - # ASL weights - targets = self.targets_classes - anti_targets = 1 - targets - xs_pos = torch.exp(log_preds) - xs_neg = 1 - xs_pos - xs_pos = xs_pos * targets - xs_neg = xs_neg * anti_targets - asymmetric_w = torch.pow(1 - xs_pos - xs_neg, - self.gamma_pos * targets + self.gamma_neg * anti_targets) - log_preds = log_preds * asymmetric_w - - if self.eps > 0: # label smoothing - self.targets_classes.mul_(1 - self.eps).add_(self.eps / num_classes) - - # loss calculation - loss = - self.targets_classes.mul(log_preds) - - loss = loss.sum(dim=-1) - if self.reduction == 'mean': - loss = loss.mean() - - return loss diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/leres/net_tools.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/leres/net_tools.py deleted file mode 100644 index 745ba5a0ef19adb869525e6b252db86780b8126e..0000000000000000000000000000000000000000 
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/leres/net_tools.py +++ /dev/null @@ -1,54 +0,0 @@ -import importlib -import torch -import os -from collections import OrderedDict - - -def get_func(func_name): - """Helper to return a function object by name. func_name must identify a - function in this module or the path to a function relative to the base - 'modeling' module. - """ - if func_name == '': - return None - try: - parts = func_name.split('.') - # Refers to a function in this module - if len(parts) == 1: - return globals()[parts[0]] - # Otherwise, assume we're referencing a module under modeling - module_name = 'annotator.leres.leres.' + '.'.join(parts[:-1]) - module = importlib.import_module(module_name) - return getattr(module, parts[-1]) - except Exception: - print('Failed to f1ind function: %s', func_name) - raise - -def load_ckpt(args, depth_model, shift_model, focal_model): - """ - Load checkpoint. - """ - if os.path.isfile(args.load_ckpt): - print("loading checkpoint %s" % args.load_ckpt) - checkpoint = torch.load(args.load_ckpt) - if shift_model is not None: - shift_model.load_state_dict(strip_prefix_if_present(checkpoint['shift_model'], 'module.'), - strict=True) - if focal_model is not None: - focal_model.load_state_dict(strip_prefix_if_present(checkpoint['focal_model'], 'module.'), - strict=True) - depth_model.load_state_dict(strip_prefix_if_present(checkpoint['depth_model'], "module."), - strict=True) - del checkpoint - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - -def strip_prefix_if_present(state_dict, prefix): - keys = sorted(state_dict.keys()) - if not all(key.startswith(prefix) for key in keys): - return state_dict - stripped_state_dict = OrderedDict() - for key, value in state_dict.items(): - stripped_state_dict[key.replace(prefix, "")] = value - return stripped_state_dict \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/dataset_mappers/coco_unified_new_baseline_dataset_mapper.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/dataset_mappers/coco_unified_new_baseline_dataset_mapper.py deleted file mode 100644 index 25a460bf73e0417916d2e09e2edc1f975155024c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/data/dataset_mappers/coco_unified_new_baseline_dataset_mapper.py +++ /dev/null @@ -1,341 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/data/dataset_mappers/coco_panoptic_new_baseline_dataset_mapper.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import copy -import logging - -import numpy as np -import torch - -from annotator.oneformer.detectron2.data import MetadataCatalog -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.data import detection_utils as utils -from annotator.oneformer.detectron2.data import transforms as T -from annotator.oneformer.detectron2.structures import BitMasks, Instances -from annotator.oneformer.oneformer.utils.box_ops import masks_to_boxes -from annotator.oneformer.oneformer.data.tokenizer import SimpleTokenizer, Tokenize - -__all__ = ["COCOUnifiedNewBaselineDatasetMapper"] - - -def build_transform_gen(cfg, 
is_train): - """ - Create a list of default :class:`Augmentation` from config. - Now it includes resizing and flipping. - Returns: - list[Augmentation] - """ - assert is_train, "Only support training augmentation" - image_size = cfg.INPUT.IMAGE_SIZE - min_scale = cfg.INPUT.MIN_SCALE - max_scale = cfg.INPUT.MAX_SCALE - - augmentation = [] - - if cfg.INPUT.RANDOM_FLIP != "none": - augmentation.append( - T.RandomFlip( - horizontal=cfg.INPUT.RANDOM_FLIP == "horizontal", - vertical=cfg.INPUT.RANDOM_FLIP == "vertical", - ) - ) - - augmentation.extend([ - T.ResizeScale( - min_scale=min_scale, max_scale=max_scale, target_height=image_size, target_width=image_size - ), - T.FixedSizeCrop(crop_size=(image_size, image_size)), - ]) - - return augmentation - - -# This is specifically designed for the COCO dataset. -class COCOUnifiedNewBaselineDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by OneFormer. - - This dataset mapper applies the same transformation as DETR for COCO panoptic segmentation. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - @configurable - def __init__( - self, - is_train=True, - *, - num_queries, - tfm_gens, - meta, - image_format, - max_seq_len, - task_seq_len, - semantic_prob, - instance_prob, - ): - """ - NOTE: this interface is experimental. - Args: - is_train: for training or inference - augmentations: a list of augmentations or deterministic transforms to apply - crop_gen: crop augmentation - tfm_gens: data augmentation - image_format: an image format supported by :func:`detection_utils.read_image`. 
- """ - self.tfm_gens = tfm_gens - logging.getLogger(__name__).info( - "[COCOUnifiedNewBaselineDatasetMapper] Full TransformGens used in training: {}".format( - str(self.tfm_gens) - ) - ) - - self.img_format = image_format - self.is_train = is_train - self.meta = meta - self.ignore_label = self.meta.ignore_label - self.num_queries = num_queries - - self.things = [] - for k,v in self.meta.thing_dataset_id_to_contiguous_id.items(): - self.things.append(v) - self.class_names = self.meta.stuff_classes - self.text_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=max_seq_len) - self.task_tokenizer = Tokenize(SimpleTokenizer(), max_seq_len=task_seq_len) - self.semantic_prob = semantic_prob - self.instance_prob = instance_prob - - @classmethod - def from_config(cls, cfg, is_train=True): - # Build augmentation - tfm_gens = build_transform_gen(cfg, is_train) - dataset_names = cfg.DATASETS.TRAIN - meta = MetadataCatalog.get(dataset_names[0]) - - ret = { - "is_train": is_train, - "meta": meta, - "tfm_gens": tfm_gens, - "image_format": cfg.INPUT.FORMAT, - "num_queries": cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES - cfg.MODEL.TEXT_ENCODER.N_CTX, - "task_seq_len": cfg.INPUT.TASK_SEQ_LEN, - "max_seq_len": cfg.INPUT.MAX_SEQ_LEN, - "semantic_prob": cfg.INPUT.TASK_PROB.SEMANTIC, - "instance_prob": cfg.INPUT.TASK_PROB.INSTANCE, - } - return ret - - def _get_semantic_dict(self, pan_seg_gt, image_shape, segments_info, num_class_obj): - instances = Instances(image_shape) - - classes = [] - texts = ["a semantic photo"] * self.num_queries - masks = [] - label = np.ones_like(pan_seg_gt) * self.ignore_label - - for segment_info in segments_info: - class_id = segment_info["category_id"] - if not segment_info["iscrowd"]: - mask = pan_seg_gt == segment_info["id"] - if not np.all(mask == False): - if class_id not in classes: - cls_name = self.class_names[class_id] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - else: - idx = classes.index(class_id) - masks[idx] += mask - masks[idx] = np.clip(masks[idx], 0, 1).astype(np.bool) - label[mask] = class_id - - num = 0 - for i, cls_name in enumerate(self.class_names): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - instances.gt_bboxes = torch.zeros((0, 4)) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - # Placeholder bounding boxes for stuff regions. Note that these are not used during training. 
- instances.gt_bboxes = torch.stack([torch.tensor([0., 0., 1., 1.])] * instances.gt_masks.shape[0]) - return instances, texts, label - - def _get_instance_dict(self, pan_seg_gt, image_shape, segments_info, num_class_obj): - instances = Instances(image_shape) - - classes = [] - texts = ["an instance photo"] * self.num_queries - masks = [] - label = np.ones_like(pan_seg_gt) * self.ignore_label - - for segment_info in segments_info: - class_id = segment_info["category_id"] - if class_id in self.things: - if not segment_info["iscrowd"]: - mask = pan_seg_gt == segment_info["id"] - if not np.all(mask == False): - cls_name = self.class_names[class_id] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - label[mask] = class_id - - num = 0 - for i, cls_name in enumerate(self.class_names): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - instances.gt_bboxes = torch.zeros((0, 4)) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - instances.gt_bboxes = masks_to_boxes(instances.gt_masks) - return instances, texts, label - - def _get_panoptic_dict(self, pan_seg_gt, image_shape, segments_info, num_class_obj): - instances = Instances(image_shape) - - classes = [] - texts = ["a panoptic photo"] * self.num_queries - masks = [] - label = np.ones_like(pan_seg_gt) * self.ignore_label - - for segment_info in segments_info: - class_id = segment_info["category_id"] - if not segment_info["iscrowd"]: - mask = pan_seg_gt == segment_info["id"] - if not np.all(mask == False): - cls_name = self.class_names[class_id] - classes.append(class_id) - masks.append(mask) - num_class_obj[cls_name] += 1 - label[mask] = class_id - - num = 0 - for i, cls_name in enumerate(self.class_names): - if num_class_obj[cls_name] > 0: - for _ in range(num_class_obj[cls_name]): - if num >= len(texts): - break - texts[num] = f"a photo with a {cls_name}" - num += 1 - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - instances.gt_bboxes = torch.zeros((0, 4)) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - instances.gt_bboxes = masks_to_boxes(instances.gt_masks) - for i in range(instances.gt_classes.shape[0]): - # Placeholder bounding boxes for stuff regions. Note that these are not used during training. - if instances.gt_classes[i].item() not in self.things: - instances.gt_bboxes[i] = torch.tensor([0., 0., 1., 1.]) - return instances, texts, label - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - image_shape = image.shape[:2] # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - return dataset_dict - - # semantic segmentation - if "sem_seg_file_name" in dataset_dict: - # PyTorch transformation not implemented for uint16, so converting it to double first - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name")).astype("double") - sem_seg_gt = transforms.apply_segmentation(sem_seg_gt) - else: - sem_seg_gt = None - - if "pan_seg_file_name" in dataset_dict: - pan_seg_gt = utils.read_image(dataset_dict.pop("pan_seg_file_name"), "RGB") - segments_info = dataset_dict["segments_info"] - - # apply the same transformation to panoptic segmentation - pan_seg_gt = transforms.apply_segmentation(pan_seg_gt) - - from panopticapi.utils import rgb2id - pan_seg_gt = rgb2id(pan_seg_gt) - - prob_task = np.random.uniform(0,1.) - - num_class_obj = {} - - for name in self.class_names: - num_class_obj[name] = 0 - - if prob_task < self.semantic_prob: - task = "The task is semantic" - instances, text, sem_seg = self._get_semantic_dict(pan_seg_gt, image_shape, segments_info, num_class_obj) - elif prob_task < self.instance_prob: - task = "The task is instance" - instances, text, sem_seg = self._get_instance_dict(pan_seg_gt, image_shape, segments_info, num_class_obj) - else: - task = "The task is panoptic" - instances, text, sem_seg = self._get_panoptic_dict(pan_seg_gt, image_shape, segments_info, num_class_obj) - - - dataset_dict["sem_seg"] = torch.from_numpy(sem_seg).long() - dataset_dict["instances"] = instances - dataset_dict["orig_shape"] = image_shape - dataset_dict["task"] = task - dataset_dict["text"] = text - dataset_dict["thing_ids"] = self.things - - return dataset_dict diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/base.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/base.py deleted file mode 100644 index 172fc63b736c4f13be1cd909433bc260760a1eaa..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/segmentors/base.py +++ /dev/null @@ -1,273 +0,0 @@ -import logging -import warnings -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -from annotator.uniformer.mmcv.runner import auto_fp16 - - -class BaseSegmentor(nn.Module): - """Base class for segmentors.""" - - __metaclass__ = ABCMeta - - def __init__(self): - super(BaseSegmentor, self).__init__() - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the segmentor has neck""" - return hasattr(self, 'neck') and 
self.neck is not None - - @property - def with_auxiliary_head(self): - """bool: whether the segmentor has auxiliary head""" - return hasattr(self, - 'auxiliary_head') and self.auxiliary_head is not None - - @property - def with_decode_head(self): - """bool: whether the segmentor has decode head""" - return hasattr(self, 'decode_head') and self.decode_head is not None - - @abstractmethod - def extract_feat(self, imgs): - """Placeholder for extract features from images.""" - pass - - @abstractmethod - def encode_decode(self, img, img_metas): - """Placeholder for encode images with backbone and decode into a - semantic segmentation map of the same size as input.""" - pass - - @abstractmethod - def forward_train(self, imgs, img_metas, **kwargs): - """Placeholder for Forward function for training.""" - pass - - @abstractmethod - def simple_test(self, img, img_meta, **kwargs): - """Placeholder for single image test.""" - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Placeholder for augmentation test.""" - pass - - def init_weights(self, pretrained=None): - """Initialize the weights in segmentor. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if pretrained is not None: - logger = logging.getLogger() - logger.info(f'load model from: {pretrained}') - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got ' - f'{type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) != ' - f'num of image meta ({len(img_metas)})') - # all images in the same aug batch all of the same ori_shape and pad - # shape - for img_meta in img_metas: - ori_shapes = [_['ori_shape'] for _ in img_meta] - assert all(shape == ori_shapes[0] for shape in ori_shapes) - img_shapes = [_['img_shape'] for _ in img_meta] - assert all(shape == img_shapes[0] for shape in img_shapes) - pad_shapes = [_['pad_shape'] for _ in img_meta] - assert all(shape == pad_shapes[0] for shape in pad_shapes) - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. - """ - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def train_step(self, data_batch, optimizer, **kwargs): - """The iteration step during training. 
- - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. - - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, - ``num_samples``. - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - ``log_vars`` contains all the variables to be sent to the - logger. - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - def val_step(self, data_batch, **kwargs): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - output = self(**data_batch, **kwargs) - return output - - @staticmethod - def _parse_losses(losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor - which may be a weighted sum of all losses, log_vars contains - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def show_result(self, - img, - result, - palette=None, - win_name='', - show=False, - wait_time=0, - out_file=None, - opacity=0.5): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor): The semantic segmentation results to draw over - `img`. - palette (list[list[int]]] | np.ndarray | None): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. 
- Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - seg = result[0] - if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - assert palette.shape[0] == len(self.CLASSES) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - - if show: - mmcv.imshow(img, win_name, wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/base.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/base.py deleted file mode 100644 index 78e4b36a9142b649ec39a8c59331bb2557f2ad57..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/configs/base.py +++ /dev/null @@ -1,56 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = "ms1mv3_arcface_r50" - -config.dataset = "ms1m-retinaface-t1" -config.embedding_size = 512 -config.sample_rate = 1 -config.fp16 = False -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -if config.dataset == "emore": - config.rec = "/train_tmp/faces_emore" - config.num_classes = 85742 - config.num_image = 5822653 - config.num_epoch = 16 - config.warmup_epoch = -1 - config.decay_epoch = [8, 14, ] - config.val_targets = ["lfw", ] - -elif config.dataset == "ms1m-retinaface-t1": - config.rec = "/train_tmp/ms1m-retinaface-t1" - config.num_classes = 93431 - config.num_image = 5179510 - config.num_epoch = 25 - config.warmup_epoch = -1 - config.decay_epoch = [11, 17, 22] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] - -elif config.dataset == "glint360k": - config.rec = "/train_tmp/glint360k" - config.num_classes = 360232 - config.num_image = 17091657 - config.num_epoch = 20 - config.warmup_epoch = -1 - config.decay_epoch = [8, 12, 15, 18] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] - -elif config.dataset == "webface": - config.rec = "/train_tmp/faces_webface_112x112" - config.num_classes = 10572 - config.num_image = "forget" - config.num_epoch = 34 - config.warmup_epoch = -1 - config.decay_epoch = [20, 28, 32] - config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/danterivers/music-generation-samples/audiocraft/modules/__init__.py b/spaces/danterivers/music-generation-samples/audiocraft/modules/__init__.py deleted file mode 100644 index 81ba30f6466ff91b90490a4fb92f7d3d0d00144d..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/modules/__init__.py +++ /dev/null @@ -1,20 
+0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder diff --git a/spaces/danterivers/music-generation-samples/audiocraft/modules/activations.py b/spaces/danterivers/music-generation-samples/audiocraft/modules/activations.py deleted file mode 100644 index 8bd6f2917a56d72db56555d0ff54b2311bc21778..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. 
Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. - - Args: - activation (Union[str, Callable[[Tensor], Tensor]]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/datasciencedojo/Text-Generator/app.py b/spaces/datasciencedojo/Text-Generator/app.py deleted file mode 100644 index 3c8e99b826793b72aaa7a3f430e9a0d5932e7fa8..0000000000000000000000000000000000000000 --- a/spaces/datasciencedojo/Text-Generator/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import gradio as gr -from transformers import pipeline - -generator = pipeline('text-generation', model = 'gpt2') - -def text_generator(sample, max_length): - outputs = generator(sample, max_length = int(max_length), num_return_sequences=3) - return outputs[0]["generated_text"], outputs[1]["generated_text"], outputs[2]["generated_text"] - -examples = [["Hello, I'm a language model", "45"], ["Hello, I'm a designer", "30"]] - -css = """ -footer {display:none !important} -.output-markdown{display:none !important} -.gr-button-primary { - z-index: 14; - height: 43px; - width: 130px; - left: 0px; - top: 0px; - padding: 0px; - cursor: pointer !important; - background: none rgb(17, 20, 45) !important; - border: none !important; - text-align: center !important; - font-family: Poppins !important; - font-size: 14px !important; - font-weight: 500 !important; - color: rgb(255, 255, 255) !important; - line-height: 1 !important; - border-radius: 12px !important; - transition: box-shadow 200ms ease 0s, background 200ms ease 0s !important; - box-shadow: none !important; -} -.gr-button-primary:hover{ - z-index: 14; - height: 43px; - width: 130px; - left: 0px; - top: 0px; - padding: 0px; - cursor: pointer !important; - background: none rgb(37, 56, 133) !important; - border: none !important; - text-align: center !important; - font-family: Poppins !important; - font-size: 14px !important; - font-weight: 500 !important; - color: rgb(255, 255, 255) !important; - line-height: 1 !important; - border-radius: 12px !important; - transition: box-shadow 200ms ease 0s, background 200ms ease 0s !important; - box-shadow: rgb(0 0 0 / 23%) 0px 1px 7px 0px !important; -} -.hover\:bg-orange-50:hover { - --tw-bg-opacity: 1 !important; - background-color: rgb(229,225,255) !important; -} -""" -demo = gr.Interface(fn=text_generator, inputs=[gr.Textbox(lines=2, placeholder="Enter sample text here", label="Sample text"), gr.Textbox(lines=1, label="Length of generated text")], outputs=[gr.Textbox(label="Generated text 1"), gr.Textbox(label="Generated text 2"), gr.Textbox(label="Generated text 3")],title="Text Generator | Data Science Dojo", examples=examples, css=css) -demo.launch( debug = True ) \ No newline at end of file diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/registry.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/registry.py deleted file mode 100644 index 655753b3b9cbd0cfe73fe93a77cf1fcc3db6d827..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/utils/registry.py +++ 
/dev/null @@ -1,82 +0,0 @@ -# Modified from: https://github.com/facebookresearch/fvcore/blob/master/fvcore/common/registry.py # noqa: E501 - - -class Registry(): - """ - The registry that provides name -> object mapping, to support third-party - users' custom modules. - - To create a registry (e.g. a backbone registry): - - .. code-block:: python - - BACKBONE_REGISTRY = Registry('BACKBONE') - - To register an object: - - .. code-block:: python - - @BACKBONE_REGISTRY.register() - class MyBackbone(): - ... - - Or: - - .. code-block:: python - - BACKBONE_REGISTRY.register(MyBackbone) - """ - - def __init__(self, name): - """ - Args: - name (str): the name of this registry - """ - self._name = name - self._obj_map = {} - - def _do_register(self, name, obj): - assert (name not in self._obj_map), (f"An object named '{name}' was already registered " - f"in '{self._name}' registry!") - self._obj_map[name] = obj - - def register(self, obj=None): - """ - Register the given object under the the name `obj.__name__`. - Can be used as either a decorator or not. - See docstring of this class for usage. - """ - if obj is None: - # used as a decorator - def deco(func_or_class): - name = func_or_class.__name__ - self._do_register(name, func_or_class) - return func_or_class - - return deco - - # used as a function call - name = obj.__name__ - self._do_register(name, obj) - - def get(self, name): - ret = self._obj_map.get(name) - if ret is None: - raise KeyError(f"No object named '{name}' found in '{self._name}' registry!") - return ret - - def __contains__(self, name): - return name in self._obj_map - - def __iter__(self): - return iter(self._obj_map.items()) - - def keys(self): - return self._obj_map.keys() - - -DATASET_REGISTRY = Registry('dataset') -ARCH_REGISTRY = Registry('arch') -MODEL_REGISTRY = Registry('model') -LOSS_REGISTRY = Registry('loss') -METRIC_REGISTRY = Registry('metric') diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_core/normalize.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_core/normalize.py deleted file mode 100644 index c9f8d0d5729b2497b5f4b611b0451dfe92872506..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_core/normalize.py +++ /dev/null @@ -1,18 +0,0 @@ -"""Normalize input string.""" -import re - -from .state_core import StateCore - -# https://spec.commonmark.org/0.29/#line-ending -NEWLINES_RE = re.compile(r"\r\n?|\n") -NULL_RE = re.compile(r"\0") - - -def normalize(state: StateCore) -> None: - # Normalize newlines - string = NEWLINES_RE.sub("\n", state.src) - - # Replace NULL characters - string = NULL_RE.sub("\uFFFD", string) - - state.src = string diff --git a/spaces/dcq/freegpt-webui/client/js/chat.js b/spaces/dcq/freegpt-webui/client/js/chat.js deleted file mode 100644 index fc31aa603cd9f49afc318cfa5471298f4646b180..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/client/js/chat.js +++ /dev/null @@ -1,514 +0,0 @@ -const query = (obj) => - Object.keys(obj) - .map((k) => encodeURIComponent(k) + "=" + encodeURIComponent(obj[k])) - .join("&"); -const markdown = window.markdownit(); -const message_box = document.getElementById(`messages`); -const message_input = document.getElementById(`message-input`); -const box_conversations = document.querySelector(`.top`); -const spinner = box_conversations.querySelector(".spinner"); -const stop_generating = 
document.querySelector(`.stop-generating`); -const send_button = document.querySelector(`#send-button`); -const user_image = `User Avatar`; -const gpt_image = `GPT Avatar`; -let prompt_lock = false; - -hljs.addPlugin(new CopyButtonPlugin()); - -message_input.addEventListener("blur", () => { - window.scrollTo(0, 0); -}); - -message_input.addEventListener("focus", () => { - document.documentElement.scrollTop = document.documentElement.scrollHeight; -}); - -const delete_conversations = async () => { - localStorage.clear(); - await new_conversation(); -}; - -const handle_ask = async () => { - message_input.style.height = `80px`; - window.scrollTo(0, 0); - let message = message_input.value; - - if (message.length > 0) { - message_input.value = ``; - message_input.dispatchEvent(new Event("input")); - await ask_gpt(message); - } -}; - -const remove_cancel_button = async () => { - stop_generating.classList.add(`stop-generating-hiding`); - - setTimeout(() => { - stop_generating.classList.remove(`stop-generating-hiding`); - stop_generating.classList.add(`stop-generating-hidden`); - }, 300); -}; - -const ask_gpt = async (message) => { - try { - message_input.value = ``; - message_input.innerHTML = ``; - message_input.innerText = ``; - - add_conversation(window.conversation_id, message.substr(0, 20)); - window.scrollTo(0, 0); - window.controller = new AbortController(); - - jailbreak = document.getElementById("jailbreak"); - model = document.getElementById("model"); - prompt_lock = true; - window.text = ``; - window.token = message_id(); - - stop_generating.classList.remove(`stop-generating-hidden`); - - add_user_message_box(message); - - message_box.scrollTop = message_box.scrollHeight; - window.scrollTo(0, 0); - await new Promise((r) => setTimeout(r, 500)); - window.scrollTo(0, 0); - - message_box.innerHTML += ` -
                                                                                                                                                -
                                                                                                                                                - ${gpt_image} -
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                - `; - - message_box.scrollTop = message_box.scrollHeight; - window.scrollTo(0, 0); - await new Promise((r) => setTimeout(r, 1000)); - window.scrollTo(0, 0); - - const response = await fetch(`/backend-api/v2/conversation`, { - method: `POST`, - signal: window.controller.signal, - headers: { - "content-type": `application/json`, - accept: `text/event-stream`, - }, - body: JSON.stringify({ - conversation_id: window.conversation_id, - action: `_ask`, - model: model.options[model.selectedIndex].value, - jailbreak: jailbreak.options[jailbreak.selectedIndex].value, - meta: { - id: window.token, - content: { - conversation: await get_conversation(window.conversation_id), - internet_access: document.getElementById("switch").checked, - content_type: "text", - parts: [ - { - content: message, - role: "user", - }, - ], - }, - }, - }), - }); - - const reader = response.body.getReader(); - - while (true) { - const { value, done } = await reader.read(); - if (done) break; - - chunk = decodeUnicode(new TextDecoder().decode(value)); - - if (chunk.includes(`
                                                                                                                                                { - const messageDiv = document.createElement("div"); - messageDiv.classList.add("message"); - - const avatarContainer = document.createElement("div"); - avatarContainer.classList.add("avatar-container"); - avatarContainer.innerHTML = user_image; - - const contentDiv = document.createElement("div"); - contentDiv.classList.add("content"); - contentDiv.id = `user_${token}`; - contentDiv.innerText = message; - - messageDiv.appendChild(avatarContainer); - messageDiv.appendChild(contentDiv); - - message_box.appendChild(messageDiv); -}; - -const decodeUnicode = (str) => { - return str.replace(/\\u([a-fA-F0-9]{4})/g, function (match, grp) { - return String.fromCharCode(parseInt(grp, 16)); - }); -}; - -const clear_conversations = async () => { - const elements = box_conversations.childNodes; - let index = elements.length; - - if (index > 0) { - while (index--) { - const element = elements[index]; - if (element.nodeType === Node.ELEMENT_NODE && element.tagName.toLowerCase() !== `button`) { - box_conversations.removeChild(element); - } - } - } -}; - -const clear_conversation = async () => { - let messages = message_box.getElementsByTagName(`div`); - - while (messages.length > 0) { - message_box.removeChild(messages[0]); - } -}; - -const delete_conversation = async (conversation_id) => { - localStorage.removeItem(`conversation:${conversation_id}`); - - if (window.conversation_id == conversation_id) { - await new_conversation(); - } - - await load_conversations(20, 0, true); -}; - -const set_conversation = async (conversation_id) => { - history.pushState({}, null, `/chat/${conversation_id}`); - window.conversation_id = conversation_id; - - await clear_conversation(); - await load_conversation(conversation_id); - await load_conversations(20, 0, true); -}; - -const new_conversation = async () => { - history.pushState({}, null, `/chat/`); - window.conversation_id = uuid(); - - await clear_conversation(); - await load_conversations(20, 0, true); -}; - -const load_conversation = async (conversation_id) => { - let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`)); - console.log(conversation, conversation_id); - - for (item of conversation.items) { - if (is_assistant(item.role)) { - message_box.innerHTML += load_gpt_message_box(item.content); - } else { - message_box.innerHTML += load_user_message_box(item.content); - } - } - - document.querySelectorAll(`code`).forEach((el) => { - hljs.highlightElement(el); - }); - - message_box.scrollTo({ top: message_box.scrollHeight, behavior: "smooth" }); - - setTimeout(() => { - message_box.scrollTop = message_box.scrollHeight; - }, 500); -}; - -const load_user_message_box = (content) => { - const messageDiv = document.createElement("div"); - messageDiv.classList.add("message"); - - const avatarContainer = document.createElement("div"); - avatarContainer.classList.add("avatar-container"); - avatarContainer.innerHTML = user_image; - - const contentDiv = document.createElement("div"); - contentDiv.classList.add("content"); - contentDiv.innerText = content; - - messageDiv.appendChild(avatarContainer); - messageDiv.appendChild(contentDiv); - - return messageDiv.outerHTML; -}; - -const load_gpt_message_box = (content) => { - return ` -
                                                                                                                                                -
                                                                                                                                                - ${gpt_image} -
                                                                                                                                                -
                                                                                                                                                - ${markdown.render(content)} -
                                                                                                                                                -
                                                                                                                                                - `; -}; - -const is_assistant = (role) => { - return role == "assistant"; -}; - -const get_conversation = async (conversation_id) => { - let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`)); - return conversation.items; -}; - -const add_conversation = async (conversation_id, title) => { - if (localStorage.getItem(`conversation:${conversation_id}`) == null) { - localStorage.setItem( - `conversation:${conversation_id}`, - JSON.stringify({ - id: conversation_id, - title: title, - items: [], - }) - ); - } -}; - -const add_message = async (conversation_id, role, content) => { - before_adding = JSON.parse(localStorage.getItem(`conversation:${conversation_id}`)); - - before_adding.items.push({ - role: role, - content: content, - }); - - localStorage.setItem(`conversation:${conversation_id}`, JSON.stringify(before_adding)); // update conversation -}; - -const load_conversations = async (limit, offset, loader) => { - //console.log(loader); - //if (loader === undefined) box_conversations.appendChild(spinner); - - let conversations = []; - for (let i = 0; i < localStorage.length; i++) { - if (localStorage.key(i).startsWith("conversation:")) { - let conversation = localStorage.getItem(localStorage.key(i)); - conversations.push(JSON.parse(conversation)); - } - } - - //if (loader === undefined) spinner.parentNode.removeChild(spinner) - await clear_conversations(); - - for (conversation of conversations) { - box_conversations.innerHTML += ` -
                                                                                                                                                -
                                                                                                                                                - - ${conversation.title} -
                                                                                                                                                - -
                                                                                                                                                - `; - } - - document.querySelectorAll(`code`).forEach((el) => { - hljs.highlightElement(el); - }); -}; - -document.getElementById(`cancelButton`).addEventListener(`click`, async () => { - window.controller.abort(); - console.log(`aborted ${window.conversation_id}`); -}); - -function h2a(str1) { - var hex = str1.toString(); - var str = ""; - - for (var n = 0; n < hex.length; n += 2) { - str += String.fromCharCode(parseInt(hex.substr(n, 2), 16)); - } - - return str; -} - -const uuid = () => { - return `xxxxxxxx-xxxx-4xxx-yxxx-${Date.now().toString(16)}`.replace(/[xy]/g, function (c) { - var r = (Math.random() * 16) | 0, - v = c == "x" ? r : (r & 0x3) | 0x8; - return v.toString(16); - }); -}; - -const message_id = () => { - random_bytes = (Math.floor(Math.random() * 1338377565) + 2956589730).toString(2); - unix = Math.floor(Date.now() / 1000).toString(2); - - return BigInt(`0b${unix}${random_bytes}`).toString(); -}; - -window.onload = async () => { - load_settings_localstorage(); - - conversations = 0; - for (let i = 0; i < localStorage.length; i++) { - if (localStorage.key(i).startsWith("conversation:")) { - conversations += 1; - } - } - - if (conversations == 0) localStorage.clear(); - - await setTimeout(() => { - load_conversations(20, 0); - }, 1); - - if (!window.location.href.endsWith(`#`)) { - if (/\/chat\/.+/.test(window.location.href)) { - await load_conversation(window.conversation_id); - } - } - - message_input.addEventListener("keydown", async (evt) => { - if (prompt_lock) return; - - if (evt.key === "Enter" && !evt.shiftKey) { - evt.preventDefault(); - await handle_ask(); - } - }); - - send_button.addEventListener("click", async (event) => { - event.preventDefault(); - if (prompt_lock) return; - message_input.blur(); - await handle_ask(); - }); - - register_settings_localstorage(); -}; - -document.querySelector(".mobile-sidebar").addEventListener("click", (event) => { - const sidebar = document.querySelector(".sidebar"); - - if (sidebar.classList.contains("shown")) { - sidebar.classList.remove("shown"); - event.target.classList.remove("rotated"); - document.body.style.overflow = "auto"; - } else { - sidebar.classList.add("shown"); - event.target.classList.add("rotated"); - document.body.style.overflow = "hidden"; - } - - window.scrollTo(0, 0); -}); - -const register_settings_localstorage = async () => { - settings_ids = ["switch", "model", "jailbreak"]; - settings_elements = settings_ids.map((id) => document.getElementById(id)); - settings_elements.map((element) => - element.addEventListener(`change`, async (event) => { - switch (event.target.type) { - case "checkbox": - localStorage.setItem(event.target.id, event.target.checked); - break; - case "select-one": - localStorage.setItem(event.target.id, event.target.selectedIndex); - break; - default: - console.warn("Unresolved element type"); - } - }) - ); -}; - -const load_settings_localstorage = async () => { - settings_ids = ["switch", "model", "jailbreak"]; - settings_elements = settings_ids.map((id) => document.getElementById(id)); - settings_elements.map((element) => { - if (localStorage.getItem(element.id)) { - switch (element.type) { - case "checkbox": - element.checked = localStorage.getItem(element.id) === "true"; - break; - case "select-one": - element.selectedIndex = parseInt(localStorage.getItem(element.id)); - break; - default: - console.warn("Unresolved element type"); - } - } 
- }); -}; - -function clearTextarea(textarea) { - textarea.style.removeProperty("height"); - textarea.style.height = `${textarea.scrollHeight + 4}px`; - - if (textarea.value.trim() === "" && textarea.value.includes("\n")) { - textarea.value = ""; - } -} diff --git a/spaces/declare-lab/tango/audioldm/hifigan/__init__.py b/spaces/declare-lab/tango/audioldm/hifigan/__init__.py deleted file mode 100644 index e0ae476fe58c48e998c56234a55b871beba4042d..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/audioldm/hifigan/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .models import Generator - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_inpaint.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_inpaint.py deleted file mode 100644 index abb57f8b62e9aab62b7dc83329ab2a3c1f623532..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_flax_stable_diffusion_inpaint.py +++ /dev/null @@ -1,580 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import warnings -from functools import partial -from typing import Dict, List, Optional, Union - -import jax -import jax.numpy as jnp -import numpy as np -from flax.core.frozen_dict import FrozenDict -from flax.jax_utils import unreplicate -from flax.training.common_utils import shard -from packaging import version -from PIL import Image -from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel - -from ...models import FlaxAutoencoderKL, FlaxUNet2DConditionModel -from ...schedulers import ( - FlaxDDIMScheduler, - FlaxDPMSolverMultistepScheduler, - FlaxLMSDiscreteScheduler, - FlaxPNDMScheduler, -) -from ...utils import PIL_INTERPOLATION, deprecate, logging, replace_example_docstring -from ..pipeline_flax_utils import FlaxDiffusionPipeline -from . import FlaxStableDiffusionPipelineOutput -from .safety_checker_flax import FlaxStableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -# Set to True to use python for loop instead of jax.fori_loop for easier debugging -DEBUG = False - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import jax - >>> import numpy as np - >>> from flax.jax_utils import replicate - >>> from flax.training.common_utils import shard - >>> import PIL - >>> import requests - >>> from io import BytesIO - >>> from diffusers import FlaxStableDiffusionInpaintPipeline - - - >>> def download_image(url): - ... response = requests.get(url) - ... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") - - - >>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" - >>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" - - >>> init_image = download_image(img_url).resize((512, 512)) - >>> mask_image = download_image(mask_url).resize((512, 512)) - - >>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( - ... "xvjiarui/stable-diffusion-2-inpainting" - ... ) - - >>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" - >>> prng_seed = jax.random.PRNGKey(0) - >>> num_inference_steps = 50 - - >>> num_samples = jax.device_count() - >>> prompt = num_samples * [prompt] - >>> init_image = num_samples * [init_image] - >>> mask_image = num_samples * [mask_image] - >>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( - ... prompt, init_image, mask_image - ... ) - # shard inputs and rng - - >>> params = replicate(params) - >>> prng_seed = jax.random.split(prng_seed, jax.device_count()) - >>> prompt_ids = shard(prompt_ids) - >>> processed_masked_images = shard(processed_masked_images) - >>> processed_masks = shard(processed_masks) - - >>> images = pipeline( - ... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True - ... ).images - >>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) - ``` -""" - - -class FlaxStableDiffusionInpaintPipeline(FlaxDiffusionPipeline): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*. - - This model inherits from [`FlaxDiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`FlaxAutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`FlaxCLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.FlaxCLIPTextModel), - specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`FlaxUNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`FlaxDDIMScheduler`], [`FlaxLMSDiscreteScheduler`], [`FlaxPNDMScheduler`], or - [`FlaxDPMSolverMultistepScheduler`]. - safety_checker ([`FlaxStableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - - def __init__( - self, - vae: FlaxAutoencoderKL, - text_encoder: FlaxCLIPTextModel, - tokenizer: CLIPTokenizer, - unet: FlaxUNet2DConditionModel, - scheduler: Union[ - FlaxDDIMScheduler, FlaxPNDMScheduler, FlaxLMSDiscreteScheduler, FlaxDPMSolverMultistepScheduler - ], - safety_checker: FlaxStableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - dtype: jnp.dtype = jnp.float32, - ): - super().__init__() - self.dtype = dtype - - if safety_checker is None: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. 
If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - def prepare_inputs( - self, - prompt: Union[str, List[str]], - image: Union[Image.Image, List[Image.Image]], - mask: Union[Image.Image, List[Image.Image]], - ): - if not isinstance(prompt, (str, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if not isinstance(image, (Image.Image, list)): - raise ValueError(f"image has to be of type `PIL.Image.Image` or list but is {type(image)}") - - if isinstance(image, Image.Image): - image = [image] - - if not isinstance(mask, (Image.Image, list)): - raise ValueError(f"image has to be of type `PIL.Image.Image` or list but is {type(image)}") - - if isinstance(mask, Image.Image): - mask = [mask] - - processed_images = jnp.concatenate([preprocess_image(img, jnp.float32) for img in image]) - processed_masks = jnp.concatenate([preprocess_mask(m, jnp.float32) for m in mask]) - # processed_masks[processed_masks < 0.5] = 0 - processed_masks = processed_masks.at[processed_masks < 0.5].set(0) - # processed_masks[processed_masks >= 0.5] = 1 - processed_masks = processed_masks.at[processed_masks >= 0.5].set(1) - - processed_masked_images = processed_images * (processed_masks < 0.5) - - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - return text_input.input_ids, processed_masked_images, processed_masks - - def _get_has_nsfw_concepts(self, features, params): - has_nsfw_concepts = self.safety_checker(features, params) - return has_nsfw_concepts - - def _run_safety_checker(self, images, safety_model_params, jit=False): - # safety_model_params should already be replicated when jit is True - pil_images = [Image.fromarray(image) for image in images] - features = self.feature_extractor(pil_images, return_tensors="np").pixel_values - - if jit: - features = shard(features) - has_nsfw_concepts = _p_get_has_nsfw_concepts(self, features, safety_model_params) - has_nsfw_concepts = unshard(has_nsfw_concepts) - safety_model_params = unreplicate(safety_model_params) - else: - has_nsfw_concepts = self._get_has_nsfw_concepts(features, safety_model_params) - - images_was_copied = False - for idx, has_nsfw_concept in enumerate(has_nsfw_concepts): - if has_nsfw_concept: - if not images_was_copied: - images_was_copied = True - images = images.copy() - - images[idx] = np.zeros(images[idx].shape, dtype=np.uint8) # black image - - if any(has_nsfw_concepts): - warnings.warn( - "Potential NSFW content was detected in one or more images. A black image will be returned" - " instead. Try again with a different prompt and/or seed." 
- ) - - return images, has_nsfw_concepts - - def _generate( - self, - prompt_ids: jnp.array, - mask: jnp.array, - masked_image: jnp.array, - params: Union[Dict, FrozenDict], - prng_seed: jax.random.KeyArray, - num_inference_steps: int, - height: int, - width: int, - guidance_scale: float, - latents: Optional[jnp.array] = None, - neg_prompt_ids: Optional[jnp.array] = None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - # get prompt text embeddings - prompt_embeds = self.text_encoder(prompt_ids, params=params["text_encoder"])[0] - - # TODO: currently it is assumed `do_classifier_free_guidance = guidance_scale > 1.0` - # implement this conditional `do_classifier_free_guidance = guidance_scale > 1.0` - batch_size = prompt_ids.shape[0] - - max_length = prompt_ids.shape[-1] - - if neg_prompt_ids is None: - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="np" - ).input_ids - else: - uncond_input = neg_prompt_ids - negative_prompt_embeds = self.text_encoder(uncond_input, params=params["text_encoder"])[0] - context = jnp.concatenate([negative_prompt_embeds, prompt_embeds]) - - latents_shape = ( - batch_size, - self.vae.config.latent_channels, - height // self.vae_scale_factor, - width // self.vae_scale_factor, - ) - if latents is None: - latents = jax.random.normal(prng_seed, shape=latents_shape, dtype=self.dtype) - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - - prng_seed, mask_prng_seed = jax.random.split(prng_seed) - - masked_image_latent_dist = self.vae.apply( - {"params": params["vae"]}, masked_image, method=self.vae.encode - ).latent_dist - masked_image_latents = masked_image_latent_dist.sample(key=mask_prng_seed).transpose((0, 3, 1, 2)) - masked_image_latents = self.vae.config.scaling_factor * masked_image_latents - del mask_prng_seed - - mask = jax.image.resize(mask, (*mask.shape[:-2], *masked_image_latents.shape[-2:]), method="nearest") - - # 8. Check that sizes of mask, masked image and latents match - num_channels_latents = self.vae.config.latent_channels - num_channels_mask = mask.shape[1] - num_channels_masked_image = masked_image_latents.shape[1] - if num_channels_latents + num_channels_mask + num_channels_masked_image != self.unet.config.in_channels: - raise ValueError( - f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects" - f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" - f" `num_channels_mask`: {num_channels_mask} + `num_channels_masked_image`: {num_channels_masked_image}" - f" = {num_channels_latents+num_channels_masked_image+num_channels_mask}. Please verify the config of" - " `pipeline.unet` or your `mask_image` or `image` input." - ) - - def loop_body(step, args): - latents, mask, masked_image_latents, scheduler_state = args - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - latents_input = jnp.concatenate([latents] * 2) - mask_input = jnp.concatenate([mask] * 2) - masked_image_latents_input = jnp.concatenate([masked_image_latents] * 2) - - t = jnp.array(scheduler_state.timesteps, dtype=jnp.int32)[step] - timestep = jnp.broadcast_to(t, latents_input.shape[0]) - - latents_input = self.scheduler.scale_model_input(scheduler_state, latents_input, t) - # concat latents, mask, masked_image_latents in the channel dimension - latents_input = jnp.concatenate([latents_input, mask_input, masked_image_latents_input], axis=1) - - # predict the noise residual - noise_pred = self.unet.apply( - {"params": params["unet"]}, - jnp.array(latents_input), - jnp.array(timestep, dtype=jnp.int32), - encoder_hidden_states=context, - ).sample - # perform guidance - noise_pred_uncond, noise_prediction_text = jnp.split(noise_pred, 2, axis=0) - noise_pred = noise_pred_uncond + guidance_scale * (noise_prediction_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents, scheduler_state = self.scheduler.step(scheduler_state, noise_pred, t, latents).to_tuple() - return latents, mask, masked_image_latents, scheduler_state - - scheduler_state = self.scheduler.set_timesteps( - params["scheduler"], num_inference_steps=num_inference_steps, shape=latents.shape - ) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * params["scheduler"].init_noise_sigma - - if DEBUG: - # run with python for loop - for i in range(num_inference_steps): - latents, mask, masked_image_latents, scheduler_state = loop_body( - i, (latents, mask, masked_image_latents, scheduler_state) - ) - else: - latents, _, _, _ = jax.lax.fori_loop( - 0, num_inference_steps, loop_body, (latents, mask, masked_image_latents, scheduler_state) - ) - - # scale and decode the image latents with vae - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.apply({"params": params["vae"]}, latents, method=self.vae.decode).sample - - image = (image / 2 + 0.5).clip(0, 1).transpose(0, 2, 3, 1) - return image - - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt_ids: jnp.array, - mask: jnp.array, - masked_image: jnp.array, - params: Union[Dict, FrozenDict], - prng_seed: jax.random.KeyArray, - num_inference_steps: int = 50, - height: Optional[int] = None, - width: Optional[int] = None, - guidance_scale: Union[float, jnp.array] = 7.5, - latents: jnp.array = None, - neg_prompt_ids: jnp.array = None, - return_dict: bool = True, - jit: bool = False, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. 
of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - latents (`jnp.array`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. tensor will ge generated - by sampling using the supplied random `generator`. - jit (`bool`, defaults to `False`): - Whether to run `pmap` versions of the generation and safety scoring functions. NOTE: This argument - exists because `__call__` is not yet end-to-end pmap-able. It will be removed in a future release. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput`] instead of - a plain tuple. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a - `tuple. When returning a tuple, the first element is a list with the generated images, and the second - element is a list of `bool`s denoting whether the corresponding generated image likely represents - "not-safe-for-work" (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - masked_image = jax.image.resize(masked_image, (*masked_image.shape[:-2], height, width), method="bicubic") - mask = jax.image.resize(mask, (*mask.shape[:-2], height, width), method="nearest") - - if isinstance(guidance_scale, float): - # Convert to a tensor so each device gets a copy. Follow the prompt_ids for - # shape information, as they may be sharded (when `jit` is `True`), or not. 
- guidance_scale = jnp.array([guidance_scale] * prompt_ids.shape[0]) - if len(prompt_ids.shape) > 2: - # Assume sharded - guidance_scale = guidance_scale[:, None] - - if jit: - images = _p_generate( - self, - prompt_ids, - mask, - masked_image, - params, - prng_seed, - num_inference_steps, - height, - width, - guidance_scale, - latents, - neg_prompt_ids, - ) - else: - images = self._generate( - prompt_ids, - mask, - masked_image, - params, - prng_seed, - num_inference_steps, - height, - width, - guidance_scale, - latents, - neg_prompt_ids, - ) - - if self.safety_checker is not None: - safety_params = params["safety_checker"] - images_uint8_casted = (images * 255).round().astype("uint8") - num_devices, batch_size = images.shape[:2] - - images_uint8_casted = np.asarray(images_uint8_casted).reshape(num_devices * batch_size, height, width, 3) - images_uint8_casted, has_nsfw_concept = self._run_safety_checker(images_uint8_casted, safety_params, jit) - images = np.asarray(images) - - # block images - if any(has_nsfw_concept): - for i, is_nsfw in enumerate(has_nsfw_concept): - if is_nsfw: - images[i] = np.asarray(images_uint8_casted[i]) - - images = images.reshape(num_devices, batch_size, height, width, 3) - else: - images = np.asarray(images) - has_nsfw_concept = False - - if not return_dict: - return (images, has_nsfw_concept) - - return FlaxStableDiffusionPipelineOutput(images=images, nsfw_content_detected=has_nsfw_concept) - - -# Static argnums are pipe, num_inference_steps, height, width. A change would trigger recompilation. -# Non-static args are (sharded) input tensors mapped over their first dimension (hence, `0`). -@partial( - jax.pmap, - in_axes=(None, 0, 0, 0, 0, 0, None, None, None, 0, 0, 0), - static_broadcasted_argnums=(0, 6, 7, 8), -) -def _p_generate( - pipe, - prompt_ids, - mask, - masked_image, - params, - prng_seed, - num_inference_steps, - height, - width, - guidance_scale, - latents, - neg_prompt_ids, -): - return pipe._generate( - prompt_ids, - mask, - masked_image, - params, - prng_seed, - num_inference_steps, - height, - width, - guidance_scale, - latents, - neg_prompt_ids, - ) - - -@partial(jax.pmap, static_broadcasted_argnums=(0,)) -def _p_get_has_nsfw_concepts(pipe, features, params): - return pipe._get_has_nsfw_concepts(features, params) - - -def unshard(x: jnp.ndarray): - # einops.rearrange(x, 'd b ... 
-> (d b) ...') - num_devices, batch_size = x.shape[:2] - rest = x.shape[2:] - return x.reshape(num_devices * batch_size, *rest) - - -def preprocess_image(image, dtype): - w, h = image.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]) - image = jnp.array(image).astype(dtype) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask, dtype): - w, h = mask.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w, h)) - mask = jnp.array(mask.convert("L")).astype(dtype) / 255.0 - mask = jnp.expand_dims(mask, axis=(0, 1)) - - return mask diff --git a/spaces/deepaksarika01/youtube-video-qa-lamini/utils.py b/spaces/deepaksarika01/youtube-video-qa-lamini/utils.py deleted file mode 100644 index 2e69d84750e28cd2d0cd651809325d4488c88c8d..0000000000000000000000000000000000000000 --- a/spaces/deepaksarika01/youtube-video-qa-lamini/utils.py +++ /dev/null @@ -1,41 +0,0 @@ -class LangChainChunker: - def __init__(self, text): - self.text = text - - def chunker(self, size=1000): - from langchain.text_splitter import CharacterTextSplitter - - # attach the duration of the video to the chunk - # [[chunk, duration]] - - text_splitter = CharacterTextSplitter( - separator=" ", - chunk_size=size, - chunk_overlap=0.9, - ) - - return text_splitter.split_text(self.text) - - def __sizeof__(self) -> int: - count = 0 - for _ in self.text: - count += 1 - return count - - -def getSubsText(video_id="", getGenerated=False): - from youtube_transcript_api import YouTubeTranscriptApi as ytapi - from youtube_transcript_api.formatters import TextFormatter - - tList = ytapi.list_transcripts(video_id) - data = "" - if getGenerated: - # TODO: implement getGenerated - pass - - for t in tList: - data = t.fetch() - - return (TextFormatter().format_transcript(data)).replace("\n", " ") - - diff --git a/spaces/deepklarity/poster2plot/train/README.md b/spaces/deepklarity/poster2plot/train/README.md deleted file mode 100644 index aef9e88696f0ce2ac85b8bfd7ec863cf6f98d425..0000000000000000000000000000000000000000 --- a/spaces/deepklarity/poster2plot/train/README.md +++ /dev/null @@ -1,9 +0,0 @@ -# Train new model - -- Download and extract the following datasets in a new folder called datasets: - - 1. [IMDb movies extensive dataset](https://www.kaggle.com/stefanoleone992/imdb-extensive-dataset) - 2. [48K IMDB Movies With Posters](https://www.kaggle.com/rezaunderfit/48k-imdb-movies-with-posters) - -- Run `create_dataset.ipynb` to create train.csv and valid.csv -- Run `train.ipynb` to train the model diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/options/base_options.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/options/base_options.py deleted file mode 100644 index d8f921d5a43434ae802a55a0fa3889c4b7ab9f6d..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/face3d/options/base_options.py +++ /dev/null @@ -1,169 +0,0 @@ -"""This script contains base options for Deep3DFaceRecon_pytorch -""" - -import argparse -import os -from util import util -import numpy as np -import torch -import face3d.models as models -import face3d.data as data - - -class BaseOptions(): - """This class defines options used during both training and test time. - - It also implements several helper functions such as parsing, printing, and saving the options. 
- It also gathers additional options defined in functions in both dataset class and model class. - """ - - def __init__(self, cmd_line=None): - """Reset the class; indicates the class hasn't been initailized""" - self.initialized = False - self.cmd_line = None - if cmd_line is not None: - self.cmd_line = cmd_line.split() - - def initialize(self, parser): - """Define the common options that are used in both training and test.""" - # basic parameters - parser.add_argument('--name', type=str, default='face_recon', help='name of the experiment. It decides where to store samples and models') - parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU') - parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here') - parser.add_argument('--vis_batch_nums', type=float, default=1, help='batch nums of images for visulization') - parser.add_argument('--eval_batch_nums', type=float, default=float('inf'), help='batch nums of images for evaluation') - parser.add_argument('--use_ddp', type=util.str2bool, nargs='?', const=True, default=True, help='whether use distributed data parallel') - parser.add_argument('--ddp_port', type=str, default='12355', help='ddp port') - parser.add_argument('--display_per_batch', type=util.str2bool, nargs='?', const=True, default=True, help='whether use batch to show losses') - parser.add_argument('--add_image', type=util.str2bool, nargs='?', const=True, default=True, help='whether add image to tensorboard') - parser.add_argument('--world_size', type=int, default=1, help='batch nums of images for evaluation') - - # model parameters - parser.add_argument('--model', type=str, default='facerecon', help='chooses which model to use.') - - # additional parameters - parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model') - parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information') - parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}') - - self.initialized = True - return parser - - def gather_options(self): - """Initialize our parser with basic options(only once). - Add additional model-specific and dataset-specific options. - These options are defined in the function - in model and dataset classes. 
- """ - if not self.initialized: # check if it has been initialized - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser = self.initialize(parser) - - # get the basic options - if self.cmd_line is None: - opt, _ = parser.parse_known_args() - else: - opt, _ = parser.parse_known_args(self.cmd_line) - - # set cuda visible devices - os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpu_ids - - # modify model-related parser options - model_name = opt.model - model_option_setter = models.get_option_setter(model_name) - parser = model_option_setter(parser, self.isTrain) - if self.cmd_line is None: - opt, _ = parser.parse_known_args() # parse again with new defaults - else: - opt, _ = parser.parse_known_args(self.cmd_line) # parse again with new defaults - - # modify dataset-related parser options - if opt.dataset_mode: - dataset_name = opt.dataset_mode - dataset_option_setter = data.get_option_setter(dataset_name) - parser = dataset_option_setter(parser, self.isTrain) - - # save and return the parser - self.parser = parser - if self.cmd_line is None: - return parser.parse_args() - else: - return parser.parse_args(self.cmd_line) - - def print_options(self, opt): - """Print and save options - - It will print both current options and default values(if different). - It will save options into a text file / [checkpoints_dir] / opt.txt - """ - message = '' - message += '----------------- Options ---------------\n' - for k, v in sorted(vars(opt).items()): - comment = '' - default = self.parser.get_default(k) - if v != default: - comment = '\t[default: %s]' % str(default) - message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment) - message += '----------------- End -------------------' - print(message) - - # save to the disk - expr_dir = os.path.join(opt.checkpoints_dir, opt.name) - util.mkdirs(expr_dir) - file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase)) - try: - with open(file_name, 'wt') as opt_file: - opt_file.write(message) - opt_file.write('\n') - except PermissionError as error: - print("permission error {}".format(error)) - pass - - def parse(self): - """Parse our options, create checkpoints directory suffix, and set up gpu device.""" - opt = self.gather_options() - opt.isTrain = self.isTrain # train or test - - # process opt.suffix - if opt.suffix: - suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else '' - opt.name = opt.name + suffix - - - # set gpu ids - str_ids = opt.gpu_ids.split(',') - gpu_ids = [] - for str_id in str_ids: - id = int(str_id) - if id >= 0: - gpu_ids.append(id) - opt.world_size = len(gpu_ids) - # if len(opt.gpu_ids) > 0: - # torch.cuda.set_device(gpu_ids[0]) - if opt.world_size == 1: - opt.use_ddp = False - - if opt.phase != 'test': - # set continue_train automatically - if opt.pretrained_name is None: - model_dir = os.path.join(opt.checkpoints_dir, opt.name) - else: - model_dir = os.path.join(opt.checkpoints_dir, opt.pretrained_name) - if os.path.isdir(model_dir): - model_pths = [i for i in os.listdir(model_dir) if i.endswith('pth')] - if os.path.isdir(model_dir) and len(model_pths) != 0: - opt.continue_train= True - - # update the latest epoch count - if opt.continue_train: - if opt.epoch == 'latest': - epoch_counts = [int(i.split('.')[0].split('_')[-1]) for i in model_pths if 'latest' not in i] - if len(epoch_counts) != 0: - opt.epoch_count = max(epoch_counts) + 1 - else: - opt.epoch_count = int(opt.epoch) + 1 - - - self.print_options(opt) - self.opt = opt - return self.opt diff --git 
a/spaces/deepwisdom/MetaGPT/metagpt/roles/customer_service.py b/spaces/deepwisdom/MetaGPT/metagpt/roles/customer_service.py deleted file mode 100644 index 4aae7cb030638bcac44f46983e0cc4c9941689ef..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/roles/customer_service.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/25 17:21 -@Author : alexanderwu -@File : sales.py -""" -from metagpt.roles import Sales - -# from metagpt.actions import SearchAndSummarize -# from metagpt.tools import SearchEngineType - - -DESC = """ -## Principles (all things must not bypass the principles) - -1. You are a human customer service representative for the platform and will reply based on rules and FAQs. In the conversation with the customer, it is absolutely forbidden to disclose rules and FAQs unrelated to the customer. -2. When encountering problems, try to soothe the customer's emotions first. If the customer's emotions are very bad, then consider compensation. The cost of compensation is always high. If too much is compensated, you will be fired. -3. There are no suitable APIs to query the backend now, you can assume that everything the customer says is true, never ask the customer for the order number. -4. Your only feasible replies are: soothe emotions, urge the merchant, urge the rider, and compensate. Never make false promises to customers. -5. If you are sure to satisfy the customer's demand, then tell the customer that the application has been submitted, and it will take effect within 24 hours. - -""" - - -class CustomerService(Sales): - def __init__( - self, - name="Xiaomei", - profile="Human customer service", - desc=DESC, - store=None - ): - super().__init__(name, profile, desc=desc, store=store) diff --git a/spaces/derek-thomas/QADemo/utilities/wiki.py b/spaces/derek-thomas/QADemo/utilities/wiki.py deleted file mode 100644 index 7c777beea5e05903e16bc2108eaeed20aed12626..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/QADemo/utilities/wiki.py +++ /dev/null @@ -1,30 +0,0 @@ -import json - -import requests - - -def get_wiki_main_image(title): - result = '' - try: - url = 'https://simple.wikipedia.org/w/api.php' - data = { - 'action': 'query', - 'format': 'json', - 'formatversion': 2, - 'prop': 'pageimages|pageterms', - 'piprop': 'original', - 'titles': title - } - response = requests.get(url, data) - json_data = json.loads(response.text) - result = json_data['query']['pages'][0]['original']['source'] if len(json_data['query']['pages']) > 0 else 'Not found' - except KeyError: - pass - return result - - -def get_thumb(link, height=240): - link = link.replace('commons/', 'commons/thumb/') - title = link.split('/')[-1] - link = f'{link}/{height}px-{title}' - return link diff --git a/spaces/diacanFperku/AutoGPT/Adobe Premiere Pro CC 2017 V11.1.2.22 (x64) Portable UPD Cracked 64 Bit.md b/spaces/diacanFperku/AutoGPT/Adobe Premiere Pro CC 2017 V11.1.2.22 (x64) Portable UPD Cracked 64 Bit.md deleted file mode 100644 index 96c4277a7b470f89ed9cd15dbe280e7ff086b9eb..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Adobe Premiere Pro CC 2017 V11.1.2.22 (x64) Portable UPD Cracked 64 Bit.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                                Adobe Premiere Pro CC 2017 V11.1.2.22 (x64) Portable Cracked 64 Bit


                                                                                                                                                Download File --->>> https://gohhs.com/2uFV93



                                                                                                                                                - -Devexpress 13.1.4 Patch Fdigallo Sangoku Pro. 2020.07.28 04:55 ... xforce keygen AutoCAD Mobile 2013 64 bit windows 7 · Adobe Premiere Pro CC 2017 v11.1.2.22 (x64) Portable Cracked free download · Face Software ... 4d29de3e1b
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                -

                                                                                                                                                diff --git a/spaces/diacanFperku/AutoGPT/Ex4tomq4fullversion LINK.md b/spaces/diacanFperku/AutoGPT/Ex4tomq4fullversion LINK.md deleted file mode 100644 index 1babae7d4de411745ec8736345585b45b9f8ba71..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Ex4tomq4fullversion LINK.md +++ /dev/null @@ -1,27 +0,0 @@ - -

                                                                                                                                                How to Convert EX4 to MQ4 Files with EX4 TO MQ4 Decompiler ORG

                                                                                                                                                -

If you are a forex trader who uses the MetaTrader 4 or 5 platform, you may have encountered expert advisors or indicators distributed in EX4 or EX5 format. These are compiled files that contain the executable code of your trading strategies. However, sometimes you may want to modify, optimize, or debug these files, and for that you need the source code in MQ4 or MQ5 format.

                                                                                                                                                -

                                                                                                                                                ex4tomq4fullversion


Download Zip: https://gohhs.com/2uFTjw



                                                                                                                                                -

                                                                                                                                                Fortunately, there is a solution for that: EX4 TO MQ4 Decompiler ORG. This is a website that provides the best quality of decompilation for MT4 and MT5 forex experts and indicators. With this service, you can easily get any source code from EX4 or EX5 files with a few clicks. You can also remove any limitations such as time limit, account limit, platform limit, or MQL5 market code limit from your files.

                                                                                                                                                -

                                                                                                                                                In this article, we will show you how to use EX4 TO MQ4 Decompiler ORG to convert your EX4 or EX5 files to MQ4 or MQ5 files.

                                                                                                                                                -

                                                                                                                                                Step 1: Visit the website

                                                                                                                                                -

                                                                                                                                                The first step is to visit the website of EX4 TO MQ4 Decompiler ORG at https://ex4tomq4.org/. Here you will see an introduction video and some testimonials from satisfied customers. You will also see the online shop where you can choose your desired service.

                                                                                                                                                -

                                                                                                                                                -

                                                                                                                                                Step 2: Choose your service

                                                                                                                                                -

                                                                                                                                                The next step is to choose the service that suits your needs. You can select from three options:

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • EX4 TO MQ4 Decompiler: This option allows you to decompile any EX4 file to MQ4 file. The price is $199 per file.
                                                                                                                                                • -
                                                                                                                                                • EX5 TO MQ5 Decompiler: This option allows you to decompile any EX5 file to MQ5 file. The price is $299 per file.
                                                                                                                                                • -
                                                                                                                                                • Remove Limit: This option allows you to remove any limit from your EX4 or EX5 file, such as time limit, account limit, platform limit, or MQL5 market code limit. The price is $99 per file.
                                                                                                                                                • -
                                                                                                                                                -

                                                                                                                                                You can also combine these options if you want. For example, if you want to decompile an EX4 file and remove its time limit, you can select both options and pay $298 in total.

                                                                                                                                                -

                                                                                                                                                Step 3: Pay your order

                                                                                                                                                -

After choosing your service, you need to pay for your order using one of the available payment methods. You can use PayPal, Bitcoin, Skrill, Neteller, Perfect Money, or WebMoney. Once you complete the payment, you will receive an extraction code by email.

                                                                                                                                                -

                                                                                                                                                Step 4: Send your file and extraction code

                                                                                                                                                -

The next step is to send your EX4 or EX5 file and your extraction code to the email address mail@ex4tomq4.cc. You can attach the file directly to the email or upload it to a cloud service such as Google Drive or Dropbox and share the link. Make sure to include your extraction code in the email body so that they can verify your order.

                                                                                                                                                -

                                                                                                                                                Step 5: Receive your source code

                                                                                                                                                -

The final step is to wait for your source code to be delivered. The processing time depends on the complexity of your file and the number of orders in the queue; it usually takes between 24 and 72 hours. Once your file is decompiled and/or patched, you will receive an email with a download link for your MQ4 or MQ5 file. You can then download it and use it as you wish.

                                                                                                                                                -

                                                                                                                                                Conclusion

                                                                                                                                                -

EX4 TO MQ4 Decompiler ORG is a reliable and professional service that can help you convert your EX4 or EX5 files to MQ4 or MQ5 files quickly and with high quality. You can also remove any limitations from your files and enjoy unlimited trading possibilities. If you are looking for a way to decompile your forex experts or indicators, look no further.

                                                                                                                                                d5da3c52bf
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Guillermo Floris Margadant Derecho Privado Romano.pdf TOP.md b/spaces/diacanFperku/AutoGPT/Guillermo Floris Margadant Derecho Privado Romano.pdf TOP.md deleted file mode 100644 index 3efaedffe0f97fe9d71bdb04b8d8bf2d05bcecda..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Guillermo Floris Margadant Derecho Privado Romano.pdf TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                                Guillermo Floris Margadant Derecho Privado Romano.pdf


DOWNLOAD: https://gohhs.com/2uFU2n



                                                                                                                                                - -1941. Sorbin and Zonca, ria, Rumania. Bibl. ica Americana, 1949. r. Its vication to the What doth Libertine in and over in an world? 2003. Hippocratic Corpus. eaay Enlivening the Truth. University Press of America. It may carry between 2 and 3 goals. Huijgens, J. Philosophy and Literature in the Classical Age of the American Spirit. The University of Michigan Press. 228 Beatrice, Florence, and Riley L. Ignatiev, l. A Literary History of the origins of the West. Cambridge University Press. 1996. In the seventeenth and eighteenth pages, and every generation is led to be one in a long of access times. Why does, Margadant, G. The Concept of God in the Arts. Dover Publications. The in the series was used as to the future in such a way as to be a Declaration of Independence and a religious decision. yet not quite, parable out of use. The Macropus of the Myth of the Sophists, the Here of the life, and the Autopoietic activation. The University of California Press. O, in his light, if they was, however, the vica-tive one of the enquiry of Christ. To do social, the community did removed to a vica-tive one in s. Philosophy of Literature. Harvard University Press. Southern California University of America. And not not in the Religious writings and vica-tive books is the source of the mind not in Christ, but the subsequent application which was the offering that was the citadel of the futu-re. As this Papacy to our act of Our Lord Jesus Christ, which does the it the free. The Anthropology of the Earth. Harvard University Press. But I have also to get upon you that were not. He could make to be possible. If it is, its submission is to assume the opinions that are vica-tive to his perspective and any of his artificial experiences and theses which have in the political Spirit. I suppose in me, the price of the foundation which is the red of my parable and my spiritual thesis. This is me to the vica-tive laity. ln it, Margadant, G. The Concept of God in the Arts. Dover Publications. A Critique of History and a vica-tive Theology of is to make. In 4fefd39f24
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                -

                                                                                                                                                diff --git a/spaces/diacanFperku/AutoGPT/LINK Download Xbox 360 Tools 6.0.0.9l.md b/spaces/diacanFperku/AutoGPT/LINK Download Xbox 360 Tools 6.0.0.9l.md deleted file mode 100644 index 60a5e9be8a11a9c168203876148eead1dd6c84fd..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/LINK Download Xbox 360 Tools 6.0.0.9l.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                                Download Xbox 360 Tools 6.0.0.9l


                                                                                                                                                Download Zip »»» https://gohhs.com/2uFTOe



                                                                                                                                                - -Download Full Ebook Here - https://tinyurl.com/y3cb6pbk . . Moartea unui ... Download Xbox 360 Tools 6.0.0.9l · CutScenes Turbo Patch 117 34 1fdad05405
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                -

                                                                                                                                                diff --git a/spaces/diacanFperku/AutoGPT/SamDrivers 15.4 DVD Edition Free Download [HOT].md b/spaces/diacanFperku/AutoGPT/SamDrivers 15.4 DVD Edition Free Download [HOT].md deleted file mode 100644 index 3fda90441625c8ce97582070d0e4a5b9c78f9aee..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/SamDrivers 15.4 DVD Edition Free Download [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                                SamDrivers 15.4 DVD Edition Free Download


                                                                                                                                                Download ►►► https://gohhs.com/2uFTkX



                                                                                                                                                - -SamDrivers 15 SamDrivers Free Download ISO Latest Version for Windows. It is full offline installer standalone setup of SamDrivers ISO for ... 1fdad05405
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                -

                                                                                                                                                diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/cornernet.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/cornernet.py deleted file mode 100644 index bb8ccc1465ab66d1615ca16701a533a22b156295..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/detectors/cornernet.py +++ /dev/null @@ -1,95 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox_mapping_back -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class CornerNet(SingleStageDetector): - """CornerNet. - - This detector is the implementation of the paper `CornerNet: Detecting - Objects as Paired Keypoints `_ . - """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(CornerNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) - - def merge_aug_results(self, aug_results, img_metas): - """Merge augmented detection bboxes and score. - - Args: - aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each - image. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: (bboxes, labels) - """ - recovered_bboxes, aug_labels = [], [] - for bboxes_labels, img_info in zip(aug_results, img_metas): - img_shape = img_info[0]['img_shape'] # using shape before padding - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - bboxes, labels = bboxes_labels - bboxes, scores = bboxes[:, :4], bboxes[:, -1:] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip) - recovered_bboxes.append(torch.cat([bboxes, scores], dim=-1)) - aug_labels.append(labels) - - bboxes = torch.cat(recovered_bboxes, dim=0) - labels = torch.cat(aug_labels) - - if bboxes.shape[0] > 0: - out_bboxes, out_labels = self.bbox_head._bboxes_nms( - bboxes, labels, self.bbox_head.test_cfg) - else: - out_bboxes, out_labels = bboxes, labels - - return out_bboxes, out_labels - - def aug_test(self, imgs, img_metas, rescale=False): - """Augment testing of CornerNet. - - Args: - imgs (list[Tensor]): Augmented images. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Note: - ``imgs`` must including flipped image pairs. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. 
- """ - img_inds = list(range(len(imgs))) - - assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], ( - 'aug test must have flipped image pair') - aug_results = [] - for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]): - img_pair = torch.cat([imgs[ind], imgs[flip_ind]]) - x = self.extract_feat(img_pair) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, [img_metas[ind], img_metas[flip_ind]], False, False) - aug_results.append(bbox_list[0]) - aug_results.append(bbox_list[1]) - - bboxes, labels = self.merge_aug_results(aug_results, img_metas) - bbox_results = bbox2result(bboxes, labels, self.bbox_head.num_classes) - - return [bbox_results] diff --git a/spaces/doluvor/faster-whisper-webui/src/vad.py b/spaces/doluvor/faster-whisper-webui/src/vad.py deleted file mode 100644 index e68ee7391e93f539a05d548601f2d87168bb1282..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/src/vad.py +++ /dev/null @@ -1,568 +0,0 @@ -from abc import ABC, abstractmethod -from collections import Counter, deque -import time - -from typing import Any, Deque, Iterator, List, Dict - -from pprint import pprint -from src.hooks.progressListener import ProgressListener -from src.hooks.subTaskProgressListener import SubTaskProgressListener -from src.hooks.whisperProgressHook import create_progress_listener_handle -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -from src.segments import merge_timestamps -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -# Workaround for https://github.com/tensorflow/tensorflow/issues/48797 -try: - import tensorflow as tf -except ModuleNotFoundError: - # Error handling - pass - -import torch - -import ffmpeg -import numpy as np - -from src.utils import format_timestamp -from enum import Enum - -class NonSpeechStrategy(Enum): - """ - Ignore non-speech frames segments. - """ - SKIP = 1 - """ - Just treat non-speech segments as speech. - """ - CREATE_SEGMENT = 2 - """ - Expand speech segments into subsequent non-speech segments. 
- """ - EXPAND_SEGMENT = 3 - -# Defaults for Silero -SPEECH_TRESHOLD = 0.3 - -# Minimum size of segments to process -MIN_SEGMENT_DURATION = 1 - -# The maximum time for texts from old segments to be used in the next segment -MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled) -PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this - -VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio - -class TranscriptionConfig(ABC): - def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - self.non_speech_strategy = non_speech_strategy - self.segment_padding_left = segment_padding_left - self.segment_padding_right = segment_padding_right - self.max_silent_period = max_silent_period - self.max_merge_size = max_merge_size - self.max_prompt_window = max_prompt_window - self.initial_segment_index = initial_segment_index - -class PeriodicTranscriptionConfig(TranscriptionConfig): - def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index) - self.periodic_duration = periodic_duration - -class AbstractTranscription(ABC): - def __init__(self, sampling_rate: int = 16000): - self.sampling_rate = sampling_rate - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - return load_audio(str, self.sampling_rate, start_time, duration) - - def is_transcribe_timestamps_fast(self): - """ - Determine if get_transcribe_timestamps is fast enough to not need parallelization. - """ - return False - - @abstractmethod - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - return - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method, - after merging the given segments using the specified configuration. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. 
- """ - merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size, - config.segment_padding_left, config.segment_padding_right) - - if config.non_speech_strategy != NonSpeechStrategy.SKIP: - # Expand segments to include the gaps between them - if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT): - # When we have a prompt window, we create speech segments betwen each segment if we exceed the merge size - merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size) - elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT: - # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment) - merged = self.expand_gaps(merged, total_duration=total_duration) - else: - raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy)) - - print("Transcribing non-speech:") - pprint(merged) - return merged - - def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig, - progressListener: ProgressListener = None): - """ - Transcribe the given audo file. - - Parameters - ---------- - audio: str - The audio file. - whisperCallable: WhisperCallback - A callback object to call to transcribe each segment. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - - try: - max_audio_duration = self.get_audio_duration(audio, config) - timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration) - - # Get speech timestamps from full audio file - merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration) - - # A deque of transcribed segments that is passed to the next segment as a prompt - prompt_window = deque() - - print("Processing timestamps:") - pprint(merged) - - result = { - 'text': "", - 'segments': [], - 'language': "" - } - languageCounter = Counter() - detected_language = None - - segment_index = config.initial_segment_index - - # Calculate progress - progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0 - progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged]) - - # For each time segment, run whisper - for segment in merged: - segment_index += 1 - segment_start = segment['start'] - segment_end = segment['end'] - segment_expand_amount = segment.get('expand_amount', 0) - segment_gap = segment.get('gap', False) - - segment_duration = segment_end - segment_start - - if segment_duration < MIN_SEGMENT_DURATION: - continue - - # Audio to run on Whisper - segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration)) - # Previous segments to use as a prompt - segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None - - # Detected language - detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None - - print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ", - segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language) - - perf_start_time = time.perf_counter() - - scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration, - sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration) - segment_result = 
whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener) - - perf_end_time = time.perf_counter() - print("Whisper took {} seconds".format(perf_end_time - perf_start_time)) - - adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration) - - # Propagate expand amount to the segments - if (segment_expand_amount > 0): - segment_without_expansion = segment_duration - segment_expand_amount - - for adjusted_segment in adjusted_segments: - adjusted_segment_end = adjusted_segment['end'] - - # Add expand amount if the segment got expanded - if (adjusted_segment_end > segment_without_expansion): - adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion - - # Append to output - result['text'] += segment_result['text'] - result['segments'].extend(adjusted_segments) - - # Increment detected language - if not segment_gap: - languageCounter[segment_result['language']] += 1 - - # Update prompt window - self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config) - - if detected_language is not None: - result['language'] = detected_language - finally: - # Notify progress listener that we are done - if progressListener is not None: - progressListener.on_finished() - return result - - def get_audio_duration(self, audio: str, config: TranscriptionConfig): - return get_audio_duration(audio) - - def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig): - if (config.max_prompt_window is not None and config.max_prompt_window > 0): - # Add segments to the current prompt window (unless it is a speech gap) - if not segment_gap: - for segment in adjusted_segments: - if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB: - prompt_window.append(segment) - - while (len(prompt_window) > 0): - first_end_time = prompt_window[0].get('end', 0) - # Time expanded in the segments should be discounted from the prompt window - first_expand_time = prompt_window[0].get('expand_amount', 0) - - if (first_end_time - first_expand_time < segment_end - config.max_prompt_window): - prompt_window.popleft() - else: - break - - def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float): - result = [] - last_end_time = 0 - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - if (last_end_time != segment_start): - delta = segment_start - last_end_time - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } ) - - last_end_time = segment_end - result.append(segment) - - # Also include total duration if specified - if (total_duration is not None and last_end_time < total_duration): - delta = total_duration - segment_start - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } ) - - return result - - # Expand the end time of each segment to the start of the next segment - def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 
1): - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - # Expand if the gap actually exists - if (delta >= 0): - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - - result.append(current_segment) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - if (last_segment['end'] < total_duration): - last_segment = last_segment.copy() - last_segment['end'] = total_duration - result[-1] = last_segment - - return result - - def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - expanded = False - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - if (max_expand_size is not None and delta <= max_expand_size): - # Just expand the current segment - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - expanded = True - - result.append(current_segment) - - # Add a gap to the next segment if needed - if (delta >= 0 and not expanded): - result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } ) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - delta = total_duration - last_segment['end'] - - if (delta > 0): - if (max_expand_size is not None and delta <= max_expand_size): - # Expand the last segment - last_segment = last_segment.copy() - last_segment['expand_amount'] = delta - last_segment['end'] = total_duration - result[-1] = last_segment - else: - result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } ) - - return result - - def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None): - result = [] - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - # Filter segments? 
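-            # Segments that start after max_source_time are skipped, and segment end times are clamped to it.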
- if (max_source_time is not None): - if (segment_start > max_source_time): - continue - segment_end = min(max_source_time, segment_end) - - new_segment = segment.copy() - - # Add to start and end - new_segment['start'] = segment_start + adjust_seconds - new_segment['end'] = segment_end + adjust_seconds - - # Handle words - if ('words' in new_segment): - for word in new_segment['words']: - # Adjust start and end - word['start'] = word['start'] + adjust_seconds - word['end'] = word['end'] + adjust_seconds - - result.append(new_segment) - return result - - def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float): - result = [] - - for entry in timestamps: - start = entry['start'] - end = entry['end'] - - result.append({ - 'start': start * factor, - 'end': end * factor - }) - return result - - -class VadSileroTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None): - super().__init__(sampling_rate=sampling_rate) - self.model = None - self.cache = cache - self._initialize_model() - - def _initialize_model(self): - if (self.cache is not None): - model_key = "VadSileroTranscription" - self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model) - print("Loaded Silerio model from cache.") - else: - self.model, self.get_speech_timestamps = self._create_model() - print("Created Silerio model") - - def _create_model(self): - model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad') - - # Silero does not benefit from multi-threading - torch.set_num_threads(1) # JIT - (get_speech_timestamps, _, _, _, _) = utils - - return model, get_speech_timestamps - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - result = [] - - print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time)) - perf_start_time = time.perf_counter() - - # Divide procesisng of audio into chunks - chunk_start = start_time - - while (chunk_start < end_time): - chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK) - - print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration))) - wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration)) - - sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD) - seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate) - adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration) - - #pprint(adjusted) - - result.extend(adjusted) - chunk_start += chunk_duration - - perf_end_time = time.perf_counter() - print("VAD processing took {} seconds".format(perf_end_time - perf_start_time)) - - return result - - def __getstate__(self): - # We only need the sampling rate - return { 'sampling_rate': self.sampling_rate } - - def __setstate__(self, state): - self.sampling_rate = state['sampling_rate'] - self.model = None - # Use the global cache - self.cache = GLOBAL_MODEL_CACHE - self._initialize_model() - -# A very simple VAD that just marks every N seconds as speech -class VadPeriodicTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000): - super().__init__(sampling_rate=sampling_rate) - - def is_transcribe_timestamps_fast(self): - # This is a very fast VAD - no 
need to parallelize it - return True - - def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float): - result = [] - - # Generate a timestamp every N seconds - start_timestamp = start_time - - while (start_timestamp < end_time): - end_timestamp = min(start_timestamp + config.periodic_duration, end_time) - segment_duration = end_timestamp - start_timestamp - - # Minimum duration is 1 second - if (segment_duration >= 1): - result.append( { 'start': start_timestamp, 'end': end_timestamp } ) - - start_timestamp = end_timestamp - - return result - -def get_audio_duration(file: str): - return float(ffmpeg.probe(file)["format"]["duration"]) - -def load_audio(file: str, sample_rate: int = 16000, - start_time: str = None, duration: str = None): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - start_time: str - The start time, using the standard FFMPEG time duration syntax, or None to disable. - - duration: str - The duration, using the standard FFMPEG time duration syntax, or None to disable. - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - inputArgs = {'threads': 0} - - if (start_time is not None): - inputArgs['ss'] = start_time - if (duration is not None): - inputArgs['t'] = duration - - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - out, _ = ( - ffmpeg.input(file, **inputArgs) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate) - .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 \ No newline at end of file diff --git a/spaces/dorkai/singpt-2.0/modules/deepspeed_parameters.py b/spaces/dorkai/singpt-2.0/modules/deepspeed_parameters.py deleted file mode 100644 index 3dbed437f5b5196d0b1fcbc582085319fb8d40d1..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt-2.0/modules/deepspeed_parameters.py +++ /dev/null @@ -1,75 +0,0 @@ -def generate_ds_config(ds_bf16, train_batch_size, nvme_offload_dir): - - ''' - DeepSpeed configration - https://huggingface.co/docs/transformers/main_classes/deepspeed - ''' - - if nvme_offload_dir: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "nvme", - "nvme_path": nvme_offload_dir, - "pin_memory": True, - "buffer_count": 5, - "buffer_size": 1e9, - "max_in_cpu": 1e9 - }, - "overlap_comm": True, - "reduce_bucket_size": "auto", - "contiguous_gradients": True, - "sub_group_size": 1e8, - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "aio": { - "block_size": 262144, - "queue_depth": 32, - "thread_count": 1, - "single_submit": False, - "overlap_events": True - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - else: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - 
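-            # ZeRO stage 3 with parameters offloaded to CPU memory (used when no NVMe offload directory is given)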
"zero_optimization": { - "stage": 3, - "offload_param": { - "device": "cpu", - "pin_memory": True - }, - "overlap_comm": True, - "contiguous_gradients": True, - "reduce_bucket_size": "auto", - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - - return ds_config diff --git a/spaces/duycse1603/math2tex/HybridViT/module/converter/tfm_converter.py b/spaces/duycse1603/math2tex/HybridViT/module/converter/tfm_converter.py deleted file mode 100644 index 4f18575f887608fd39cd0d67d483158111fb3ef9..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/HybridViT/module/converter/tfm_converter.py +++ /dev/null @@ -1,90 +0,0 @@ -import torch -import numpy as np - - -class TFMLabelConverter(object): - """ Convert between text-label and text-index """ - - list_token = ['[PAD]', '[GO]', '[s]', '[UNK]'] - def __init__(self, character, device): - list_character = character - self.character = TFMLabelConverter.list_token + list_character - - self.device = device - self.dict = {} - for i, char in enumerate(self.character): - self.dict[char] = i - self.ignore_idx = self.dict['[PAD]'] - - @staticmethod - def START() -> int: - return TFMLabelConverter.list_token.index('[GO]') - - @staticmethod - def END() -> int: - return TFMLabelConverter.list_token.index('[s]') - - @staticmethod - def UNK() -> int: - return TFMLabelConverter.list_token.index('[UNK]') - - @staticmethod - def PAD() -> int: - return TFMLabelConverter.list_token.index('[PAD]') - - def encode(self, text, batch_max_length=25): - length = [len(s) + 1 for s in text] # +1 for [s] at end of sentence. - batch_max_length += 1 - batch_text = torch.LongTensor(len(text), batch_max_length + 1).fill_(self.ignore_idx) - for i, t in enumerate(text): - text = list(t) - - if len(text) > batch_max_length: - text = text[:(batch_max_length-1)] - - text.append('[s]') - text = [self.dict[char] if char in self.dict else self.dict['[UNK]'] for char in text] - batch_text[i][0] = torch.LongTensor([self.dict['[GO]']]) - batch_text[i][1:1 + len(text)] = torch.LongTensor(text) # batch_text[:, 0] = [GO] token - return (batch_text.to(self.device), torch.IntTensor(length).to(self.device)) - - def decode(self, text_index, token_level='word'): - """ convert text-index into text-label. 
""" - texts = [] - batch_size = text_index.shape[0] - for index in range(batch_size): - if token_level == 'word': - text = ' '.join([self.character[i] for i in text_index[index, :]]) - else: - text = ''.join([self.character[i] for i in text_index[index, :]]) - texts.append(text) - return texts - - def detokenize(self, token_ids): - """convert token ids to list of token""" - b_toks = [] - for tok in token_ids: - toks = [] - for i in tok: - if self.character[i] == '[s]': - break - toks.append(self.character[i]) - b_toks.append(toks) - - return b_toks - -if __name__ == '__main__': - vocab = ['S', 'ố', ' ', '2', '5', '3', 'đ', 'ư', 'ờ', 'n', 'g', 'T', 'r', 'ầ', 'P', 'h', 'ú', ',', 'ị', 't', 'ấ', 'N', 'a', 'm', 'á', 'c', 'H', 'u', 'y', 'ệ', 'ả', 'i', 'D', 'ơ', '8', '9', 'Đ', 'B', 'ộ', 'L', 'ĩ', '6', 'Q', 'ậ', 'ì', 'ạ', 'ồ', 'C', 'í', 'M', '4', 'E', '/', 'K', 'p', '1', 'A', 'x', 'ặ', 'ễ', '0', 'â', 'à', 'ế', 'ừ', 'ê', '-', '7', 'o', 'V', 'ô', 'ã', 'G', 'ớ', 'Y', 'I', 'ề', 'ò', 'l', 'R', 'ỹ', 'ủ', 'X', "'", 'e', 'ắ', 'ổ', 'ằ', 'k', 's', '.', 'ợ', 'ù', 'ứ', 'ă', 'ỳ', 'ẵ', 'ý', 'ó', 'ẩ', 'ọ', 'J', 'ũ', 'ữ', 'ự', 'õ', 'ỉ', 'ỏ', 'v', 'd', 'Â', 'W', 'U', 'O', 'é', 'ở', 'ỷ', '(', ')', 'ử', 'è', 'ể', 'ụ', 'ỗ', 'F', 'q', 'ẻ', 'ỡ', 'b', 'ỵ', 'Ứ', '#', 'ẽ', 'Ô', 'Ê', 'Ơ', '+', 'z', 'Ấ', 'w', 'Z', '&', 'Á', '~', 'f', 'Ạ', 'Ắ', 'j', ':', 'Ă', '<', '>', 'ẹ', '_', 'À', 'Ị', 'Ư', 'Ễ'] - text = [ - "190B Trần Quang Khải, Phường Tân Định, Quận 1, TP Hồ Chí Minh", - "164/2B, Quốc lộ 1A, Phường Lê Bình, Quận Cái Răng, Cần Thơ", - "Cẩm Huy, Huyện Cẩm Xuyên, Hà Tĩnh" - ] - tfm_convert = TFMLabelConverter(vocab, 'cpu') - texts, lengths = tfm_convert.encode(text, 70) - print(texts) - for text in texts: - print('Encode', text) - text = text.unsqueeze(0) - decode_text = tfm_convert.decode(text, 'char') - print('Decode', decode_text) \ No newline at end of file diff --git a/spaces/edisonlee55/hysts-anime-face-detector/anime_face_detector/detector.py b/spaces/edisonlee55/hysts-anime-face-detector/anime_face_detector/detector.py deleted file mode 100644 index 9a44f23940bc6edf89a55ed4bed6f62a817c4ab8..0000000000000000000000000000000000000000 --- a/spaces/edisonlee55/hysts-anime-face-detector/anime_face_detector/detector.py +++ /dev/null @@ -1,147 +0,0 @@ -from __future__ import annotations - -import pathlib -import warnings -from typing import Optional, Union - -import cv2 -import mmcv -import numpy as np -import torch.nn as nn -from mmdet.apis import inference_detector, init_detector -from mmpose.apis import inference_top_down_pose_model, init_pose_model -from mmpose.datasets import DatasetInfo - - -class LandmarkDetector: - def __init__( - self, - landmark_detector_config_or_path: Union[mmcv.Config, str, - pathlib.Path], - landmark_detector_checkpoint_path: Union[str, pathlib.Path], - face_detector_config_or_path: Optional[Union[mmcv.Config, str, - pathlib.Path]] = None, - face_detector_checkpoint_path: Optional[Union[ - str, pathlib.Path]] = None, - device: str = 'cuda:0', - flip_test: bool = True, - box_scale_factor: float = 1.1): - landmark_config = self._load_config(landmark_detector_config_or_path) - self.dataset_info = DatasetInfo( - landmark_config.dataset_info) # type: ignore - face_detector_config = self._load_config(face_detector_config_or_path) - - self.landmark_detector = self._init_pose_model( - landmark_config, landmark_detector_checkpoint_path, device, - flip_test) - self.face_detector = self._init_face_detector( - face_detector_config, face_detector_checkpoint_path, device) - - self.box_scale_factor = 
box_scale_factor - - @staticmethod - def _load_config( - config_or_path: Optional[Union[mmcv.Config, str, pathlib.Path]] - ) -> Optional[mmcv.Config]: - if config_or_path is None or isinstance(config_or_path, mmcv.Config): - return config_or_path - return mmcv.Config.fromfile(config_or_path) - - @staticmethod - def _init_pose_model(config: mmcv.Config, - checkpoint_path: Union[str, pathlib.Path], - device: str, flip_test: bool) -> nn.Module: - if isinstance(checkpoint_path, pathlib.Path): - checkpoint_path = checkpoint_path.as_posix() - model = init_pose_model(config, checkpoint_path, device=device) - model.cfg.model.test_cfg.flip_test = flip_test - return model - - @staticmethod - def _init_face_detector(config: Optional[mmcv.Config], - checkpoint_path: Optional[Union[str, - pathlib.Path]], - device: str) -> Optional[nn.Module]: - if config is not None: - if isinstance(checkpoint_path, pathlib.Path): - checkpoint_path = checkpoint_path.as_posix() - model = init_detector(config, checkpoint_path, device=device) - else: - model = None - return model - - def _detect_faces(self, image: np.ndarray) -> list[np.ndarray]: - # predicted boxes using mmdet model have the format of - # [x0, y0, x1, y1, score] - boxes = inference_detector(self.face_detector, image)[0] - # scale boxes by `self.box_scale_factor` - boxes = self._update_pred_box(boxes) - return boxes - - def _update_pred_box(self, pred_boxes: np.ndarray) -> list[np.ndarray]: - boxes = [] - for pred_box in pred_boxes: - box = pred_box[:4] - size = box[2:] - box[:2] + 1 - new_size = size * self.box_scale_factor - center = (box[:2] + box[2:]) / 2 - tl = center - new_size / 2 - br = tl + new_size - pred_box[:4] = np.concatenate([tl, br]) - boxes.append(pred_box) - return boxes - - def _detect_landmarks( - self, image: np.ndarray, - boxes: list[dict[str, np.ndarray]]) -> list[dict[str, np.ndarray]]: - preds, _ = inference_top_down_pose_model( - self.landmark_detector, - image, - boxes, - format='xyxy', - dataset_info=self.dataset_info, - return_heatmap=False) - return preds - - @staticmethod - def _load_image( - image_or_path: Union[np.ndarray, str, pathlib.Path]) -> np.ndarray: - if isinstance(image_or_path, np.ndarray): - image = image_or_path - elif isinstance(image_or_path, str): - image = cv2.imread(image_or_path) - elif isinstance(image_or_path, pathlib.Path): - image = cv2.imread(image_or_path.as_posix()) - else: - raise ValueError - return image - - def __call__( - self, - image_or_path: Union[np.ndarray, str, pathlib.Path], - boxes: Optional[list[np.ndarray]] = None - ) -> list[dict[str, np.ndarray]]: - """Detect face landmarks. - - Args: - image_or_path: An image with BGR channel order or an image path. - boxes: A list of bounding boxes for faces. Each bounding box - should be of the form [x0, y0, x1, y1, [score]]. - - Returns: A list of detection results. Each detection result has - bounding box of the form [x0, y0, x1, y1, [score]], and landmarks - of the form [x, y, score]. - """ - image = self._load_image(image_or_path) - if boxes is None: - if self.face_detector is not None: - boxes = self._detect_faces(image) - else: - warnings.warn( - 'Neither the face detector nor the bounding box is ' - 'specified. 
So the entire image is treated as the face ' - 'region.') - h, w = image.shape[:2] - boxes = [np.array([0, 0, w - 1, h - 1, 1])] - box_list = [{'bbox': box} for box in boxes] - return self._detect_landmarks(image, box_list) diff --git a/spaces/editing-images/ledits/app.py b/spaces/editing-images/ledits/app.py deleted file mode 100644 index d51503f6eeec6183bd0bde0303e4d263df260c8e..0000000000000000000000000000000000000000 --- a/spaces/editing-images/ledits/app.py +++ /dev/null @@ -1,873 +0,0 @@ -import gradio as gr -import torch -import numpy as np -import requests -import random -from io import BytesIO -from utils import * -from constants import * -from inversion_utils import * -from modified_pipeline_semantic_stable_diffusion import SemanticStableDiffusionPipeline -from torch import autocast, inference_mode -from diffusers import StableDiffusionPipeline -from diffusers import DDIMScheduler -from transformers import AutoProcessor, BlipForConditionalGeneration -from share_btn import community_icon_html, loading_icon_html, share_js - -# load pipelines -sd_model_id = "stabilityai/stable-diffusion-2-1-base" -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -sd_pipe = StableDiffusionPipeline.from_pretrained(sd_model_id,torch_dtype=torch.float16).to(device) -sd_pipe.scheduler = DDIMScheduler.from_config(sd_model_id, subfolder = "scheduler") -sem_pipe = SemanticStableDiffusionPipeline.from_pretrained(sd_model_id, torch_dtype=torch.float16).to(device) -blip_processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -blip_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base",torch_dtype=torch.float16).to(device) - - - -## IMAGE CPATIONING ## -def caption_image(input_image): - inputs = blip_processor(images=input_image, return_tensors="pt").to(device, torch.float16) - pixel_values = inputs.pixel_values - - generated_ids = blip_model.generate(pixel_values=pixel_values, max_length=50) - generated_caption = blip_processor.batch_decode(generated_ids, skip_special_tokens=True)[0] - return generated_caption, generated_caption - - - -## DDPM INVERSION AND SAMPLING ## -def invert(x0, prompt_src="", num_diffusion_steps=100, cfg_scale_src = 3.5, eta = 1): - - # inverts a real image according to Algorihm 1 in https://arxiv.org/pdf/2304.06140.pdf, - # based on the code in https://github.com/inbarhub/DDPM_inversion - - # returns wt, zs, wts: - # wt - inverted latent - # wts - intermediate inverted latents - # zs - noise maps - - sd_pipe.scheduler.set_timesteps(num_diffusion_steps) - - # vae encode image - with inference_mode(): - w0 = (sd_pipe.vae.encode(x0).latent_dist.mode() * 0.18215) - - # find Zs and wts - forward process - wt, zs, wts = inversion_forward_process(sd_pipe, w0, etas=eta, prompt=prompt_src, cfg_scale=cfg_scale_src, prog_bar=True, num_inference_steps=num_diffusion_steps) - return zs, wts - - -def sample(zs, wts, prompt_tar="", cfg_scale_tar=15, skip=36, eta = 1): - - # reverse process (via Zs and wT) - w0, _ = inversion_reverse_process(sd_pipe, xT=wts[skip], etas=eta, prompts=[prompt_tar], cfg_scales=[cfg_scale_tar], prog_bar=True, zs=zs[skip:]) - - # vae decode image - with inference_mode(): - x0_dec = sd_pipe.vae.decode(1 / 0.18215 * w0).sample - if x0_dec.dim()<4: - x0_dec = x0_dec[None,:,:,:] - img = image_grid(x0_dec) - return img - - -def reconstruct(tar_prompt, - image_caption, - tar_cfg_scale, - skip, - wts, zs, - do_reconstruction, - reconstruction, - reconstruct_button - ): - - if reconstruct_button == 
"Hide Reconstruction": - return reconstruction.value, reconstruction, ddpm_edited_image.update(visible=False), do_reconstruction, "Show Reconstruction" - - else: - if do_reconstruction: - if image_caption.lower() == tar_prompt.lower(): # if image caption was not changed, run actual reconstruction - tar_prompt = "" - reconstruction_img = sample(zs.value, wts.value, prompt_tar=tar_prompt, skip=skip, cfg_scale_tar=tar_cfg_scale) - reconstruction = gr.State(value=reconstruction_img) - do_reconstruction = False - return reconstruction.value, reconstruction, ddpm_edited_image.update(visible=True), do_reconstruction, "Hide Reconstruction" - - -def load_and_invert( - input_image, - do_inversion, - seed, randomize_seed, - wts, zs, - src_prompt ="", - tar_prompt="", - steps=100, - src_cfg_scale = 3.5, - skip=36, - tar_cfg_scale=15, - progress=gr.Progress(track_tqdm=True) - -): - - - x0 = load_512(input_image, device=device).to(torch.float16) - - if do_inversion or randomize_seed: - # invert and retrieve noise maps and latent - zs_tensor, wts_tensor = invert(x0 =x0 , prompt_src=src_prompt, num_diffusion_steps=steps, cfg_scale_src=src_cfg_scale) - wts = gr.State(value=wts_tensor) - zs = gr.State(value=zs_tensor) - do_inversion = False - - return wts, zs, do_inversion, inversion_progress.update(visible=False) - -## SEGA ## - -def edit(input_image, - wts, zs, - tar_prompt, - image_caption, - steps, - skip, - tar_cfg_scale, - edit_concept_1,edit_concept_2,edit_concept_3, - guidnace_scale_1,guidnace_scale_2,guidnace_scale_3, - warmup_1, warmup_2, warmup_3, - neg_guidance_1, neg_guidance_2, neg_guidance_3, - threshold_1, threshold_2, threshold_3, - do_reconstruction, - reconstruction, - - # for inversion in case it needs to be re computed (and avoid delay): - do_inversion, - seed, - randomize_seed, - src_prompt, - src_cfg_scale): - show_share_button = gr.update(visible=True) - if do_inversion or randomize_seed: - x0 = load_512(input_image, device=device).to(torch.float16) - # invert and retrieve noise maps and latent - zs_tensor, wts_tensor = invert(x0 =x0 , prompt_src=src_prompt, num_diffusion_steps=steps, cfg_scale_src=src_cfg_scale) - wts = gr.State(value=wts_tensor) - zs = gr.State(value=zs_tensor) - do_inversion = False - - if image_caption.lower() == tar_prompt.lower(): # if image caption was not changed, run pure sega - tar_prompt = "" - - if edit_concept_1 != "" or edit_concept_2 != "" or edit_concept_3 != "": - editing_args = dict( - editing_prompt = [edit_concept_1,edit_concept_2,edit_concept_3], - reverse_editing_direction = [ neg_guidance_1, neg_guidance_2, neg_guidance_3,], - edit_warmup_steps=[warmup_1, warmup_2, warmup_3,], - edit_guidance_scale=[guidnace_scale_1,guidnace_scale_2,guidnace_scale_3], - edit_threshold=[threshold_1, threshold_2, threshold_3], - edit_momentum_scale=0.3, - edit_mom_beta=0.6, - eta=1,) - - latnets = wts.value[skip].expand(1, -1, -1, -1) - sega_out = sem_pipe(prompt=tar_prompt, latents=latnets, guidance_scale = tar_cfg_scale, - num_images_per_prompt=1, - num_inference_steps=steps, - use_ddpm=True, wts=wts.value, zs=zs.value[skip:], **editing_args) - - return sega_out.images[0], reconstruct_button.update(visible=True), do_reconstruction, reconstruction, wts, zs, do_inversion, show_share_button - - else: # if sega concepts were not added, performs regular ddpm sampling - - if do_reconstruction: # if ddpm sampling wasn't computed - pure_ddpm_img = sample(zs.value, wts.value, prompt_tar=tar_prompt, skip=skip, cfg_scale_tar=tar_cfg_scale) - reconstruction = 
gr.State(value=pure_ddpm_img) - do_reconstruction = False - return pure_ddpm_img, reconstruct_button.update(visible=False), do_reconstruction, reconstruction, wts, zs, do_inversion, show_share_button - - return reconstruction.value, reconstruct_button.update(visible=False), do_reconstruction, reconstruction, wts, zs, do_inversion, show_share_button - - -def randomize_seed_fn(seed, randomize_seed): - if randomize_seed: - seed = random.randint(0, np.iinfo(np.int32).max) - torch.manual_seed(seed) - return seed - -def crop_image(image): - h, w, c = image.shape - if h < w: - offset = (w - h) // 2 - image = image[:, offset:offset + h] - elif w < h: - offset = (h - w) // 2 - image = image[offset:offset + w] - image = np.array(Image.fromarray(image).resize((512, 512))) - return image - - -def get_example(): - case = [ - [ - 'examples/lemons_input.jpg', - # '', - 'apples', 'lemons', - 'a ceramic bowl', - 'examples/lemons_output.jpg', - - - 7,7, - 1,1, - False, True, - 100, - 36, - 15, - - ], - [ - 'examples/girl_with_pearl_earring_input.png', - # '', - 'glasses', '', - '', - 'examples/girl_with_pearl_earring_output.png', - - - 3,7, - 3,2, - False,False, - 100, - 36, - 15, - - ], - [ - 'examples/rockey_shore_input.jpg', - # '', - 'sea turtle', '', - 'watercolor painting', - 'examples/rockey_shore_output.jpg', - - - 7,7, - 1,2, - False,False, - 100, - 36, - 15, - ], - [ - 'examples/flower_field_input.jpg', - # '', - 'wheat', 'red flowers', - 'oil painting', - 'examples/flower_field_output_2.jpg', - - - 20,7, - 1,1, - False,True, - 100, - 36, - 15, - - ], - [ - 'examples/butterfly_input.jpg', - # '', - 'bee', 'butterfly', - 'oil painting', - 'examples/butterfly_output.jpg', - 7, 7, - 1,1, - False, True, - 100, - 36, - 15, - ] - ] - return case - - -def swap_visibilities(input_image, - edit_concept_1, - edit_concept_2, - tar_prompt, - sega_edited_image, - guidnace_scale_1, - guidnace_scale_2, - warmup_1, - warmup_2, - neg_guidance_1, - neg_guidance_2, - steps, - skip, - tar_cfg_scale, - sega_concepts_counter - -): - sega_concepts_counter=0 - concept1_update = update_display_concept("Remove" if neg_guidance_1 else "Add", edit_concept_1, neg_guidance_1, sega_concepts_counter) - if(edit_concept_2 != ""): - concept2_update = update_display_concept("Remove" if neg_guidance_2 else "Add", edit_concept_2, neg_guidance_2, sega_concepts_counter+1) - else: - concept2_update = gr.update(visible=False), gr.update(visible=False),gr.update(visible=False), gr.update(value=neg_guidance_2),gr.update(visible=True),gr.update(visible=False),sega_concepts_counter+1 - - return (gr.update(visible=True), *concept1_update[:-1], *concept2_update) - - - -######## -# demo # -######## - - -intro = """ -

- LEDITS - Pipeline for editing images
-
- Real Image Latent Editing with Edit Friendly DDPM and Semantic Guidance
-
- Project Page | ArXiv
-
- Duplicate Space
-
                                                                                                                                                """ - -help_text = """ -- **Getting Started - edit images with DDPM X SEGA:** - - The are 3 general setting options you can play with - - - 1. **Pure DDPM Edit -** Describe the desired edited output image in detail - 2. **Pure SEGA Edit -** Keep the target prompt empty ***or*** with a description of the original image and add editing concepts for Semantic Gudiance editing - 3. **Combined -** Describe the desired edited output image in detail and add additional SEGA editing concepts on top -- **Getting Started - Tips** - - While the best approach depends on your editing objective and source image, we can layout a few guiding tips to use as a starting point - - - 1. **DDPM** is usually more suited for scene/style changes and major subject changes (for example ) while **SEGA** allows for more fine grained control, changes are more delicate, more suited for adding details (for example facial expressions and attributes, subtle style modifications, object adding/removing) - 2. The more you describe the scene in the target prompt (both the parts and details you wish to keep the same and those you wish to change), the better the result - 3. **Combining DDPM Edit with SEGA -** - Try dividing your editing objective to more significant scene/style/subject changes and detail adding/removing and more moderate changes. Then describe the major changes in a detailed target prompt and add the more fine grained details as SEGA concepts. - 4. **Reconstruction:** Using an empty source prompt + target prompt will lead to a perfect reconstruction -- **Fidelity vs creativity**: - - Bigger values → more fidelity, smaller values → more creativity - - 1. `Skip Steps` - 2. `Warmup` (SEGA) - 3. `Threshold` (SEGA) - - Bigger values → more creativity, smaller values → more fidelity - - 1. `Guidance Scale` - 2. 
`Concept Guidance Scale` (SEGA) -""" - -with gr.Blocks(css="style.css") as demo: - def update_counter(sega_concepts_counter, concept1, concept2, concept3): - if sega_concepts_counter == "": - sega_concepts_counter = sum(1 for concept in (concept1, concept2, concept3) if concept != '') - return sega_concepts_counter - def remove_concept(sega_concepts_counter, row_triggered): - sega_concepts_counter -= 1 - rows_visibility = [gr.update(visible=False) for _ in range(4)] - - if(row_triggered-1 > sega_concepts_counter): - rows_visibility[sega_concepts_counter] = gr.update(visible=True) - else: - rows_visibility[row_triggered-1] = gr.update(visible=True) - - row1_visibility, row2_visibility, row3_visibility, row4_visibility = rows_visibility - - guidance_scale_label = "Concept Guidance Scale" - # enable_interactive = gr.update(interactive=True) - return (gr.update(visible=False), - gr.update(visible=False, value="",), - gr.update(interactive=True, value=""), - gr.update(visible=False,label = guidance_scale_label), - gr.update(interactive=True, value =False), - gr.update(value=DEFAULT_WARMUP_STEPS), - gr.update(value=DEFAULT_THRESHOLD), - gr.update(visible=True), - gr.update(interactive=True, value="custom"), - row1_visibility, - row2_visibility, - row3_visibility, - row4_visibility, - sega_concepts_counter - ) - - - - def update_display_concept(button_label, edit_concept, neg_guidance, sega_concepts_counter): - sega_concepts_counter += 1 - guidance_scale_label = "Concept Guidance Scale" - if(button_label=='Remove'): - neg_guidance = True - guidance_scale_label = "Negative Guidance Scale" - - return (gr.update(visible=True), #boxn - gr.update(visible=True, value=edit_concept), #concept_n - gr.update(visible=True,label = guidance_scale_label), #guidance_scale_n - gr.update(value=neg_guidance),#neg_guidance_n - gr.update(visible=False), #row_n - gr.update(visible=True), #row_n+1 - sega_concepts_counter - ) - - - def display_editing_options(run_button, clear_button, sega_tab): - return run_button.update(visible=True), clear_button.update(visible=True), sega_tab.update(visible=True) - - def update_interactive_mode(add_button_label): - if add_button_label == "Clear": - return gr.update(interactive=False), gr.update(interactive=False) - else: - return gr.update(interactive=True), gr.update(interactive=True) - - def update_dropdown_parms(dropdown): - if dropdown == 'custom': - return DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE,DEFAULT_WARMUP_STEPS, DEFAULT_THRESHOLD - elif dropdown =='style': - return STYLE_SEGA_CONCEPT_GUIDANCE_SCALE,STYLE_WARMUP_STEPS, STYLE_THRESHOLD - elif dropdown =='object': - return OBJECT_SEGA_CONCEPT_GUIDANCE_SCALE,OBJECT_WARMUP_STEPS, OBJECT_THRESHOLD - elif dropdown =='faces': - return FACE_SEGA_CONCEPT_GUIDANCE_SCALE,FACE_WARMUP_STEPS, FACE_THRESHOLD - - - def reset_do_inversion(): - return True - - def reset_do_reconstruction(): - do_reconstruction = True - return do_reconstruction - - def reset_image_caption(): - return "" - - def update_inversion_progress_visibility(input_image, do_inversion): - if do_inversion and not input_image is None: - return inversion_progress.update(visible=True) - else: - return inversion_progress.update(visible=False) - - def update_edit_progress_visibility(input_image, do_inversion): - # if do_inversion and not input_image is None: - # return inversion_progress.update(visible=True) - # else: - return inversion_progress.update(visible=True) - - - gr.HTML(intro) - wts = gr.State() - zs = gr.State() - reconstruction = gr.State() - do_inversion = 
gr.State(value=True) - do_reconstruction = gr.State(value=True) - sega_concepts_counter = gr.State(0) - image_caption = gr.State(value="") - - with gr.Row(): - input_image = gr.Image(label="Input Image", interactive=True, elem_id="input_image") - ddpm_edited_image = gr.Image(label=f"Pure DDPM Inversion Image", interactive=False, visible=False) - sega_edited_image = gr.Image(label=f"LEDITS Edited Image", interactive=False, elem_id="output_image") - input_image.style(height=365, width=365) - ddpm_edited_image.style(height=365, width=365) - sega_edited_image.style(height=365, width=365) - - with gr.Group(visible=False) as share_btn_container: - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=True) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=True) - - with gr.Row(): - with gr.Box(visible=False, elem_id="box1") as box1: - with gr.Row(): - concept_1 = gr.Button(scale=3, value="") - remove_concept1 = gr.Button("x", scale=1, min_width=10) - with gr.Row(): - guidnace_scale_1 = gr.Slider(label='Concept Guidance Scale', minimum=1, maximum=30, - info="How strongly the concept should modify the image", - value=DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE, - step=0.5, interactive=True) - with gr.Box(visible=False, elem_id="box2") as box2: - with gr.Row(): - concept_2 = gr.Button(scale=3, value="") - remove_concept2 = gr.Button("x", scale=1, min_width=10) - with gr.Row(): - guidnace_scale_2 = gr.Slider(label='Concept Guidance Scale', minimum=1, maximum=30, - info="How strongly the concept should modify the image", - value=DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE, - step=0.5, interactive=True) - with gr.Box(visible=False, elem_id="box3") as box3: - with gr.Row(): - concept_3 = gr.Button(scale=3, value="") - remove_concept3 = gr.Button("x", scale=1, min_width=10) - with gr.Row(): - guidnace_scale_3 = gr.Slider(label='Concept Guidance Scale', minimum=1, maximum=30, - info="How strongly the concept should modify the image", - value=DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE, - step=0.5, interactive=True) - - - with gr.Row(): - inversion_progress = gr.Textbox(visible=False, label="Inversion progress") - - with gr.Box(): - intro_segs = gr.Markdown("Add/Remove Concepts from your Image with Semantic Guidance") - # 1st SEGA concept - with gr.Row().style(mobile_collapse=False) as row1: - with gr.Column(scale=3, min_width=100): - with gr.Row().style(mobile_collapse=True): - # with gr.Column(scale=3, min_width=100): - edit_concept_1 = gr.Textbox( - label="Concept", - show_label=True, - max_lines=1, value="", - placeholder="E.g.: Sunglasses", - ) - # with gr.Column(scale=2, min_width=100):# better mobile ui - dropdown1 = gr.Dropdown(label = "Edit Type", value ='custom' , choices=['custom','style', 'object', 'faces']) - - - with gr.Column(scale=1, min_width=100, visible=False): - neg_guidance_1 = gr.Checkbox( - label='Remove Concept?') - - with gr.Column(scale=1, min_width=100): - with gr.Row().style(mobile_collapse=False): # better mobile ui - with gr.Column(): - add_1 = gr.Button('Add') - remove_1 = gr.Button('Remove') - - - # 2nd SEGA concept - with gr.Row(visible=False).style(equal_height=True) as row2: - with gr.Column(scale=3, min_width=100): - with gr.Row().style(mobile_collapse=True): #better mobile UI - # with gr.Column(scale=3, min_width=100): - edit_concept_2 = gr.Textbox( - label="Concept", - show_label=True, - max_lines=1, - placeholder="E.g.: Realistic", - ) - # with 
gr.Column(scale=2, min_width=100):# better mobile ui - dropdown2 = gr.Dropdown(label = "Edit Type", value ='custom' , choices=['custom','style', 'object', 'faces']) - - with gr.Column(scale=1, min_width=100, visible=False): - neg_guidance_2 = gr.Checkbox( - label='Remove Concept?') - - with gr.Column(scale=1, min_width=100): - with gr.Row().style(mobile_collapse=False): # better mobile ui - with gr.Column(): - add_2 = gr.Button('Add') - remove_2 = gr.Button('Remove') - - # 3rd SEGA concept - with gr.Row(visible=False).style(equal_height=True) as row3: - with gr.Column(scale=3, min_width=100): - with gr.Row().style(mobile_collapse=True): #better mobile UI - # with gr.Column(scale=3, min_width=100): - edit_concept_3 = gr.Textbox( - label="Concept", - show_label=True, - max_lines=1, - placeholder="E.g.: orange", - ) - # with gr.Column(scale=2, min_width=100): - dropdown3 = gr.Dropdown(label = "Edit Type", value ='custom' , choices=['custom','style', 'object', 'faces']) - - with gr.Column(scale=1, min_width=100, visible=False): - neg_guidance_3 = gr.Checkbox( - label='Remove Concept?',visible=True) - - with gr.Column(scale=1, min_width=100): - with gr.Row().style(mobile_collapse=False): # better mobile ui - with gr.Column(): - add_3 = gr.Button('Add') - remove_3 = gr.Button('Remove') - - with gr.Row(visible=False).style(equal_height=True) as row4: - gr.Markdown("### Max of 3 concepts reached. Remove a concept to add more") - - #with gr.Row(visible=False).style(mobile_collapse=False, equal_height=True): - # add_concept_button = gr.Button("+1 concept") - - - - with gr.Row().style(mobile_collapse=False, equal_height=True): - tar_prompt = gr.Textbox( - label="Describe your edited image (optional)", - elem_id="target_prompt", - # show_label=False, - max_lines=1, value="", scale=3, - placeholder="Target prompt, DDPM Inversion", info = "DDPM Inversion Prompt. Can help with global changes, modify to what you would like to see" - ) - # caption_button = gr.Button("Caption Image", scale=1) - - - with gr.Row(): - run_button = gr.Button("Edit your image!", visible=True) - - - with gr.Accordion("Advanced Options", open=False): - with gr.Tabs() as tabs: - - with gr.TabItem('General options', id=2): - with gr.Row(): - with gr.Column(min_width=100): - clear_button = gr.Button("Clear", visible=True) - src_prompt = gr.Textbox(lines=1, label="Source Prompt", interactive=True, placeholder="") - steps = gr.Number(value=100, precision=0, label="Num Diffusion Steps", interactive=True) - src_cfg_scale = gr.Number(value=3.5, label=f"Source Guidance Scale", interactive=True) - - - with gr.Column(min_width=100): - reconstruct_button = gr.Button("Show Reconstruction", visible=False) - skip = gr.Slider(minimum=0, maximum=60, value=36, step=1, label="Skip Steps", interactive=True, info = "At which step to start denoising. Bigger values increase fidelity to input image") - tar_cfg_scale = gr.Slider(minimum=1, maximum=30,value=15, label=f"Guidance Scale", interactive=True) - seed = gr.Number(value=0, precision=0, label="Seed", interactive=True) - randomize_seed = gr.Checkbox(label='Randomize seed', value=False) - - with gr.TabItem('SEGA options', id=3) as sega_advanced_tab: - # 1st SEGA concept - gr.Markdown("1st concept") - with gr.Row().style(mobile_collapse=False, equal_height=True): - warmup_1 = gr.Slider(label='Warmup', minimum=0, maximum=50, - value=DEFAULT_WARMUP_STEPS, - step=1, interactive=True, info="At which step to start applying semantic guidance. 
Bigger values reduce edit concept's effect") - threshold_1 = gr.Slider(label='Threshold', minimum=0.5, maximum=0.99, - value=DEFAULT_THRESHOLD, step=0.01, interactive=True, - info = "Lower the threshold for more effect (e.g. ~0.9 for style transfer)") - - # 2nd SEGA concept - gr.Markdown("2nd concept") - with gr.Row() as row2_advanced: - warmup_2 = gr.Slider(label='Warmup', minimum=0, maximum=50, - value=DEFAULT_WARMUP_STEPS, - step=1, interactive=True, info="At which step to start applying semantic guidance. Bigger values reduce edit concept's effect") - threshold_2 = gr.Slider(label='Threshold', minimum=0.5, maximum=0.99, - value=DEFAULT_THRESHOLD, - step=0.01, interactive=True, - info = "Lower the threshold for more effect (e.g. ~0.9 for style transfer)") - # 3rd SEGA concept - gr.Markdown("3rd concept") - with gr.Row() as row3_advanced: - warmup_3 = gr.Slider(label='Warmup', minimum=0, maximum=50, - value=DEFAULT_WARMUP_STEPS, step=1, - interactive=True, info="At which step to start applying semantic guidance. Bigger values reduce edit concept's effect") - threshold_3 = gr.Slider(label='Threshold', minimum=0.5, maximum=0.99, - value=DEFAULT_THRESHOLD, step=0.01, - interactive=True, - info = "Lower the threshold for more effect (e.g. ~0.9 for style transfer)") - - # caption_button.click( - # fn = caption_image, - # inputs = [input_image], - # outputs = [tar_prompt] - # ) - #neg_guidance_1.change(fn = update_label, inputs=[neg_guidance_1], outputs=[add_1]) - #neg_guidance_2.change(fn = update_label, inputs=[neg_guidance_2], outputs=[add_2]) - #neg_guidance_3.change(fn = update_label, inputs=[neg_guidance_3], outputs=[add_3]) - add_1.click(fn=update_counter, - inputs=[sega_concepts_counter,edit_concept_1,edit_concept_2,edit_concept_3], - outputs=sega_concepts_counter,queue=False).then(fn = update_display_concept, inputs=[add_1, edit_concept_1, neg_guidance_1, sega_concepts_counter], outputs=[box1, concept_1, guidnace_scale_1,neg_guidance_1,row1, row2, sega_concepts_counter],queue=False) - add_2.click(fn=update_counter,inputs=[sega_concepts_counter,edit_concept_1,edit_concept_2,edit_concept_3], outputs=sega_concepts_counter,queue=False).then(fn = update_display_concept, inputs=[add_2, edit_concept_2, neg_guidance_2, sega_concepts_counter], outputs=[box2, concept_2, guidnace_scale_2,neg_guidance_2,row2, row3, sega_concepts_counter],queue=False) - add_3.click(fn=update_counter,inputs=[sega_concepts_counter,edit_concept_1,edit_concept_2,edit_concept_3], outputs=sega_concepts_counter,queue=False).then(fn = update_display_concept, inputs=[add_3, edit_concept_3, neg_guidance_3, sega_concepts_counter], outputs=[box3, concept_3, guidnace_scale_3,neg_guidance_3,row3, row4, sega_concepts_counter],queue=False) - - remove_1.click(fn = update_display_concept, inputs=[remove_1, edit_concept_1, neg_guidance_1, sega_concepts_counter], outputs=[box1, concept_1, guidnace_scale_1,neg_guidance_1,row1, row2, sega_concepts_counter],queue=False) - remove_2.click(fn = update_display_concept, inputs=[remove_2, edit_concept_2, neg_guidance_2 ,sega_concepts_counter], outputs=[box2, concept_2, guidnace_scale_2,neg_guidance_2,row2, row3,sega_concepts_counter],queue=False) - remove_3.click(fn = update_display_concept, inputs=[remove_3, edit_concept_3, neg_guidance_3, sega_concepts_counter], outputs=[box3, concept_3, guidnace_scale_3,neg_guidance_3, row3, row4, sega_concepts_counter],queue=False) - - remove_concept1.click( - fn=update_counter,inputs=[sega_concepts_counter,edit_concept_1,edit_concept_2,edit_concept_3], 
outputs=sega_concepts_counter,queue=False).then( - fn = remove_concept, inputs=[sega_concepts_counter,gr.State(1)], outputs= [box1, concept_1, edit_concept_1, guidnace_scale_1,neg_guidance_1,warmup_1, threshold_1, add_1, dropdown1, row1, row2, row3, row4, sega_concepts_counter],queue=False) - remove_concept2.click( - fn=update_counter,inputs=[sega_concepts_counter,edit_concept_1,edit_concept_2,edit_concept_3], outputs=sega_concepts_counter,queue=False).then( - fn = remove_concept, inputs=[sega_concepts_counter,gr.State(2)], outputs=[box2, concept_2, edit_concept_2, guidnace_scale_2,neg_guidance_2, warmup_2, threshold_2, add_2 , dropdown2, row1, row2, row3, row4, sega_concepts_counter],queue=False) - remove_concept3.click( - fn=update_counter,inputs=[sega_concepts_counter,edit_concept_1,edit_concept_2,edit_concept_3], outputs=sega_concepts_counter,queue=False).then( - fn = remove_concept,inputs=[sega_concepts_counter,gr.State(3)], outputs=[box3, concept_3, edit_concept_3, guidnace_scale_3,neg_guidance_3,warmup_3, threshold_3, add_3, dropdown3, row1, row2, row3, row4, sega_concepts_counter],queue=False) - - #add_concept_button.click(fn = update_display_concept, inputs=sega_concepts_counter, - # outputs= [row2, row2_advanced, row3, row3_advanced, add_concept_button, sega_concepts_counter], queue = False) - - run_button.click( - fn=edit, - inputs=[input_image, - wts, zs, - tar_prompt, - image_caption, - steps, - skip, - tar_cfg_scale, - edit_concept_1,edit_concept_2,edit_concept_3, - guidnace_scale_1,guidnace_scale_2,guidnace_scale_3, - warmup_1, warmup_2, warmup_3, - neg_guidance_1, neg_guidance_2, neg_guidance_3, - threshold_1, threshold_2, threshold_3, do_reconstruction, reconstruction, - do_inversion, - seed, - randomize_seed, - src_prompt, - src_cfg_scale - - - ], - outputs=[sega_edited_image, reconstruct_button, do_reconstruction, reconstruction, wts, zs, do_inversion, share_btn_container]) - # .success(fn=update_gallery_display, inputs= [prev_output_image, sega_edited_image], outputs = [gallery, gallery, prev_output_image]) - - - input_image.change( - fn = reset_do_inversion, - outputs = [do_inversion], - queue = False).then( - fn = randomize_seed_fn, - inputs = [seed, randomize_seed], - outputs = [seed], queue = False) - # Automatically start inverting upon input_image change - input_image.upload(fn = crop_image, inputs = [input_image], outputs = [input_image],queue=False).then( - fn = reset_do_inversion, - outputs = [do_inversion], - queue = False).then( - fn = randomize_seed_fn, - inputs = [seed, randomize_seed], - outputs = [seed], queue = False).then(fn = caption_image, - inputs = [input_image], - outputs = [tar_prompt, image_caption]).then(fn = update_inversion_progress_visibility, inputs =[input_image,do_inversion], - outputs=[inversion_progress],queue=False).then( - fn=load_and_invert, - inputs=[input_image, - do_inversion, - seed, randomize_seed, - wts, zs, - src_prompt, - tar_prompt, - steps, - src_cfg_scale, - skip, - tar_cfg_scale, - ], - # outputs=[ddpm_edited_image, wts, zs, do_inversion], - outputs=[wts, zs, do_inversion, inversion_progress], - ).then(fn = update_inversion_progress_visibility, inputs =[input_image,do_inversion], - outputs=[inversion_progress],queue=False).then( - lambda: reconstruct_button.update(visible=False), - outputs=[reconstruct_button]).then( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], - queue = False) - - - # Repeat inversion (and reconstruction) when these params are changed: - src_prompt.change( - fn = reset_do_inversion, - 
outputs = [do_inversion], queue = False).then( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], queue = False) - - steps.change( - fn = reset_do_inversion, - outputs = [do_inversion], queue = False).then( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], queue = False) - - - src_cfg_scale.change( - fn = reset_do_inversion, - outputs = [do_inversion], queue = False).then( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], queue = False) - - # Repeat only reconstruction these params are changed: - - tar_prompt.change( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], queue = False) - - tar_cfg_scale.change( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], queue = False) - - skip.change( - fn = reset_do_reconstruction, - outputs = [do_reconstruction], queue = False) - - dropdown1.change(fn=update_dropdown_parms, inputs = [dropdown1], outputs = [guidnace_scale_1,warmup_1, threshold_1], queue=False) - dropdown2.change(fn=update_dropdown_parms, inputs = [dropdown2], outputs = [guidnace_scale_2,warmup_2, threshold_2], queue=False) - dropdown3.change(fn=update_dropdown_parms, inputs = [dropdown3], outputs = [guidnace_scale_3,warmup_3, threshold_3], queue=False) - - clear_components = [input_image,ddpm_edited_image,ddpm_edited_image,sega_edited_image, do_inversion, - src_prompt, steps, src_cfg_scale, seed, - tar_prompt, skip, tar_cfg_scale, reconstruct_button,reconstruct_button, - edit_concept_1, guidnace_scale_1,guidnace_scale_1,warmup_1, threshold_1, neg_guidance_1,dropdown1, concept_1, concept_1, row1, - edit_concept_2, guidnace_scale_2,guidnace_scale_2,warmup_2, threshold_2, neg_guidance_2,dropdown2, concept_2, concept_2, row2, - edit_concept_3, guidnace_scale_3,guidnace_scale_3,warmup_3, threshold_3, neg_guidance_3,dropdown3, concept_3,concept_3, row3, - row4,sega_concepts_counter, box1, box2, box3 ] - - clear_components_output_vals = [None, None,ddpm_edited_image.update(visible=False), None, True, - "", DEFAULT_DIFFUSION_STEPS, DEFAULT_SOURCE_GUIDANCE_SCALE, DEFAULT_SEED, - "", DEFAULT_SKIP_STEPS, DEFAULT_TARGET_GUIDANCE_SCALE, reconstruct_button.update(value="Show Reconstruction"),reconstruct_button.update(visible=False), - "", DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE,guidnace_scale_1.update(visible=False), DEFAULT_WARMUP_STEPS, DEFAULT_THRESHOLD, DEFAULT_NEGATIVE_GUIDANCE, "custom","", concept_1.update(visible=False), row1.update(visible=True), - "", DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE,guidnace_scale_2.update(visible=False), DEFAULT_WARMUP_STEPS, DEFAULT_THRESHOLD, DEFAULT_NEGATIVE_GUIDANCE, "custom","", concept_2.update(visible=False), row2.update(visible=False), - "", DEFAULT_SEGA_CONCEPT_GUIDANCE_SCALE,guidnace_scale_3.update(visible=False), DEFAULT_WARMUP_STEPS, DEFAULT_THRESHOLD, DEFAULT_NEGATIVE_GUIDANCE, "custom","",concept_3.update(visible=False), row3.update(visible=False), row4.update(visible=False), gr.update(value=0), - box1.update(visible=False), box2.update(visible=False), box3.update(visible=False)] - - - clear_button.click(lambda: clear_components_output_vals, outputs =clear_components) - - reconstruct_button.click(lambda: ddpm_edited_image.update(visible=True), outputs=[ddpm_edited_image]).then(fn = reconstruct, - inputs = [tar_prompt, - image_caption, - tar_cfg_scale, - skip, - wts, zs, - do_reconstruction, - reconstruction, - reconstruct_button], - outputs = [ddpm_edited_image,reconstruction, ddpm_edited_image, do_reconstruction, reconstruct_button]) - - randomize_seed.change( - fn = randomize_seed_fn, 
- inputs = [seed, randomize_seed], - outputs = [seed], - queue = False) - - share_button.click(None, [], [], _js=share_js) - - gr.Examples( - label='Examples', - fn=swap_visibilities, - run_on_click=True, - examples=get_example(), - inputs=[input_image, - edit_concept_1, - edit_concept_2, - tar_prompt, - sega_edited_image, - guidnace_scale_1, - guidnace_scale_2, - warmup_1, - warmup_2, - neg_guidance_1, - neg_guidance_2, - steps, - skip, - tar_cfg_scale, - sega_concepts_counter - ], - outputs=[share_btn_container, box1, concept_1, guidnace_scale_1,neg_guidance_1, row1, row2,box2, concept_2, guidnace_scale_2,neg_guidance_2,row2, row3,sega_concepts_counter], - cache_examples=True - ) - - -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/epexVfeibi/Imagedeblurr/Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key Keygen.md b/spaces/epexVfeibi/Imagedeblurr/Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key Keygen.md deleted file mode 100644 index 5a67c91360894e2089e6cbb3ebbfb6af033344e6..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key Keygen.md +++ /dev/null @@ -1,122 +0,0 @@ - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen: How to Download and Install

                                                                                                                                                - -

                                                                                                                                                If you are looking for a powerful PDF converter and editor, you might want to check out Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen. This is a torrent file that contains the full version of Adobe Acrobat XI Pro 11.0.20 with crack, serial key and keygen. You can use this software to create, edit, convert, sign, protect and share PDF files with ease.

                                                                                                                                                -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen


                                                                                                                                                Download File ••• https://jinyurl.com/2uEnFV



                                                                                                                                                - -

                                                                                                                                                What is Adobe Acrobat XI Pro 11.0.20?

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 is the latest version of Adobe Acrobat, the leading PDF software in the market. It has many features and tools that make it easy to work with PDF files, such as:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                • Editing text and images in PDFs with a new point-and-click interface.
                                                                                                                                                • -
                                                                                                                                                • Converting PDF files to PowerPoint presentations with full editing capabilities.
                                                                                                                                                • -
                                                                                                                                                • Creating new PDF and web forms with the Adobe FormsCentral desktop app.
                                                                                                                                                • -
                                                                                                                                                • Standardizing routine PDF tasks with Actions.
                                                                                                                                                • -
                                                                                                                                                • Creating and analyzing forms online and collecting responses in real time.
                                                                                                                                                • -
                                                                                                                                                • Customizing PDF Portfolios with new layouts, themes and color palettes.
                                                                                                                                                • -
                                                                                                                                                • Securing PDF files with passwords, permissions and digital signatures.
                                                                                                                                                • -
                                                                                                                                                • Integrating with cloud services such as Adobe Document Cloud and Dropbox.
                                                                                                                                                • -
                                                                                                                                                - -

                                                                                                                                                How to Download Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen?

                                                                                                                                                - -

                                                                                                                                                To download Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen, you need to have a torrent client installed on your computer, such as uTorrent or BitTorrent. Then, you need to follow these steps:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Click on the link below to download the torrent file of Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen.
                                                                                                                                                2. -
                                                                                                                                                3. Open the torrent file with your torrent client and start downloading the software.
                                                                                                                                                4. -
                                                                                                                                                5. Wait until the download is complete and then extract the files using WinRAR or 7-Zip.
                                                                                                                                                6. -
                                                                                                                                                7. Run the setup file of Adobe Acrobat XI Pro 11.0.20 and follow the installation instructions.
                                                                                                                                                8. -
                                                                                                                                                9. Copy the crack file from the crack folder and paste it into the installation directory of Adobe Acrobat XI Pro 11.0.20.
                                                                                                                                                10. -
                                                                                                                                                11. Run the keygen file from the keygen folder and generate a serial key for Adobe Acrobat XI Pro 11.0.20.
                                                                                                                                                12. -
                                                                                                                                                13. Launch Adobe Acrobat XI Pro 11.0.20 and enter the serial key when prompted.
                                                                                                                                                14. -
                                                                                                                                                15. Enjoy using Adobe Acrobat XI Pro 11.0.20 with full features and functions.
                                                                                                                                                16. -
                                                                                                                                                - -

                                                                                                                                                Download Link

                                                                                                                                                - -

                                                                                                                                                You can download Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen from this link:

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL + Crack [TechTools]

                                                                                                                                                - -

                                                                                                                                                Conclusion

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is a great software for working with PDF files. It has many features and tools that make it easy to create, edit, convert, sign, protect and share PDF files with ease. You can download it from the link above and install it using the crack, serial key and keygen provided in the torrent file.

                                                                                                                                                -

                                                                                                                                                - -

                                                                                                                                                If you have any questions or problems regarding Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen, feel free to leave a comment below or contact us through our website.

                                                                                                                                                -

                                                                                                                                                What are the Features of Adobe Acrobat XI Pro 11.0.20?

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 is not just a PDF converter and editor, but also a powerful software that offers many features and tools to enhance your PDF experience. Some of the features of Adobe Acrobat XI Pro 11.0.20 are:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                • Accessibility: You can create PDFs and verify their accessibility in a few quick steps, ensuring that your PDFs are compliant with the standards and regulations for people with disabilities.
                                                                                                                                                • -
                                                                                                                                                • Action Wizard: You can create and apply custom actions to any PDF to save time and keystrokes, automating repetitive tasks such as optimizing, securing, or archiving PDFs.
                                                                                                                                                • -
                                                                                                                                                • ISO Standards: You can convert your PDFs to PDF/X, PDF/A, or PDF/E formats to comply with the industry standards for printing, archiving, or engineering purposes.
                                                                                                                                                • -
                                                                                                                                                • Cloud Services: You can integrate your PDFs with cloud services such as Adobe Document Cloud and Dropbox, allowing you to access, store, share, and collaborate on your PDFs from anywhere.
                                                                                                                                                • -
                                                                                                                                                - -

                                                                                                                                                How to Crack Adobe Acrobat XI Pro 11.0.20 with Serial Key and Keygen?

                                                                                                                                                - -

                                                                                                                                                If you want to use Adobe Acrobat XI Pro 11.0.20 with full features and functions, you need to crack it with serial key and keygen. This will allow you to bypass the activation process and use the software without any limitations. To crack Adobe Acrobat XI Pro 11.0.20 with serial key and keygen, you need to follow these steps:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Download Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen from the link above and extract the files using WinRAR or 7-Zip.
                                                                                                                                                2. -
                                                                                                                                                3. Run the setup file of Adobe Acrobat XI Pro 11.0.20 and follow the installation instructions.
                                                                                                                                                4. -
                                                                                                                                                5. Copy the crack file from the crack folder and paste it into the installation directory of Adobe Acrobat XI Pro 11.0.20.
                                                                                                                                                6. -
                                                                                                                                                7. Run the keygen file from the keygen folder and generate a serial key for Adobe Acrobat XI Pro 11.0.20.
                                                                                                                                                8. -
                                                                                                                                                9. Launch Adobe Acrobat XI Pro 11.0.20 and enter the serial key when prompted.
                                                                                                                                                10. -
                                                                                                                                                11. Enjoy using Adobe Acrobat XI Pro 11.0.20 with full features and functions.
                                                                                                                                                12. -
                                                                                                                                                - -

                                                                                                                                                Why Choose Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen?

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is a great choice for anyone who wants to work with PDF files in a professional and efficient way. By downloading this torrent file, you can get the full version of Adobe Acrobat XI Pro 11.0.20 with crack, serial key and keygen, which will enable you to use all the features and tools of this software without any restrictions or costs. You can create, edit, convert, sign, protect and share PDF files with ease, as well as enjoy many other benefits such as accessibility, action wizard, ISO standards, and cloud services.

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is a reliable and safe torrent file that has been tested and verified by many users around the world. You can download it from the link above and install it on your computer without any problems or risks.

                                                                                                                                                - -

                                                                                                                                                If you want to work with PDF files in a professional and efficient way, don't hesitate to download Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen today and enjoy all the advantages of this powerful software.

                                                                                                                                                -

                                                                                                                                                What are the Benefits of Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen?

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is not only a powerful software for working with PDF files, but also a beneficial one for many users and professionals. Some of the benefits of Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen are:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                • Compatibility: You can use Adobe Acrobat XI Pro 11.0.20 on both Windows and macOS platforms, and enjoy almost identical versions and features on both operating systems.
                                                                                                                                                • -
                                                                                                                                                • Flexibility: You can use Adobe Acrobat XI Pro 11.0.20 to create, edit, convert, sign, protect and share PDF files in various formats and applications, such as Word, Excel, PowerPoint, HTML, JPG, PNG, GIF, and more.
                                                                                                                                                • -
                                                                                                                                                • Productivity: You can use Adobe Acrobat XI Pro 11.0.20 to streamline your workflow and save time and effort, by using features such as editing text and images in PDFs, converting PDF files to PowerPoint presentations, creating and analyzing forms online, standardizing routine PDF tasks with Actions, and more.
                                                                                                                                                • -
                                                                                                                                                • Security: You can use Adobe Acrobat XI Pro 11.0.20 to protect your PDF files and data from unauthorized access and modification, by using features such as securing PDF files with passwords, permissions and digital signatures, redacting sensitive information from PDFs, verifying the authenticity of PDFs with certificates, and more.
                                                                                                                                                • -
                                                                                                                                                - -

                                                                                                                                                What are the Reviews of Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen?

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is a highly rated software by many users and professionals who have used it and reviewed it online. Some of the reviews of Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen are:

                                                                                                                                                - -
                                                                                                                                                -

                                                                                                                                                "Adobe Acrobat Pro is the powerhouse of PDF editing and management apps from the company that created the format...A subscription isn’t cheap, but the app’s capability and flexibility are worth the price." - PCMag

                                                                                                                                                -
                                                                                                                                                - -
                                                                                                                                                -

                                                                                                                                                "Adobe Acrobat Pro is a great choice for anyone who wants to work with PDF files in a professional and efficient way...By downloading this torrent file, you can get the full version of Adobe Acrobat XI Pro 11.0.20 with crack, serial key and keygen...You can create, edit, convert, sign, protect and share PDF files with ease..." - SolidTorrents

                                                                                                                                                -
                                                                                                                                                - -
                                                                                                                                                -

                                                                                                                                                "I'm attempting to run an update on my newly installed adobe acrobat XI pro...The installation went smoothly; serial number validated and program opened up with no issues...I'm very pleased with this product." - Adobe Support Community

                                                                                                                                                -
                                                                                                                                                - -

                                                                                                                                                Conclusion

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is a great software for working with PDF files in a professional and efficient way. It has many features and tools that make it easy to create, edit, convert, sign, protect and share PDF files with ease, as well as enjoy many other benefits such as accessibility, action wizard, ISO standards, and cloud services.

                                                                                                                                                - -

                                                                                                                                                Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen is a reliable and safe torrent file that has been tested and verified by many users around the world. You can download it from the link above and install it on your computer without any problems or risks.

                                                                                                                                                - -

                                                                                                                                                If you want to work with PDF files in a professional and efficient way, don't hesitate to download Adobe Acrobat XI Pro 11.0.20 FINAL Crack Serial Key keygen today and enjoy all the advantages of this powerful software.

                                                                                                                                                -

                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                \ No newline at end of file diff --git a/spaces/f2api/gpt-academic/request_llm/bridge_tgui.py b/spaces/f2api/gpt-academic/request_llm/bridge_tgui.py deleted file mode 100644 index fcf852f0474892bd179843ece3f4a83110bd7756..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/request_llm/bridge_tgui.py +++ /dev/null @@ -1,171 +0,0 @@ -''' -Contributed by SagsMug. Modified by binary-husky -https://github.com/oobabooga/text-generation-webui/pull/175 -''' - -import asyncio -import json -import random -import string -import websockets -import logging -import time -import threading -import importlib -from toolbox import get_conf, update_ui - - -def random_hash(): - letters = string.ascii_lowercase + string.digits - return ''.join(random.choice(letters) for i in range(9)) - -async def run(context, max_token, temperature, top_p, addr, port): - params = { - 'max_new_tokens': max_token, - 'do_sample': True, - 'temperature': temperature, - 'top_p': top_p, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'encoder_repetition_penalty': 1.0, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': True, - 'seed': -1, - } - session = random_hash() - - async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket: - while content := json.loads(await websocket.recv()): - #Python3.10 syntax, replace with if elif on older - if content["msg"] == "send_hash": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12 - })) - elif content["msg"] == "estimation": - pass - elif content["msg"] == "send_data": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12, - "data": [ - context, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['encoder_repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - params['seed'], - ] - })) - elif content["msg"] == "process_starts": - pass - elif content["msg"] in ["process_generating", "process_completed"]: - yield content["output"]["data"][0] - # You can search for your desired end indicator and - # stop generation by closing the websocket here - if (content["msg"] == "process_completed"): - break - - - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = "What I would like to say is the following: " + inputs - history.extend([inputs, ""]) - chatbot.append([inputs, ""]) - yield from 
update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - prompt = raw_input - tgui_say = "" - - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - mutable = ["", time.time()] - def run_coorotine(mutable): - async def get_result(mutable): - # "tgui:galactica-1.3b@localhost:7860" - - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(mutable[0]):]) - mutable[0] = response - if (time.time() - mutable[1]) > 3: - print('exit when no listener') - break - asyncio.run(get_result(mutable)) - - thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True) - thread_listen.start() - - while thread_listen.is_alive(): - time.sleep(1) - mutable[1] = time.time() - # Print intermediate steps - if tgui_say != mutable[0]: - tgui_say = mutable[0] - history[-1] = tgui_say - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - raw_input = "What I would like to say is the following: " + inputs - prompt = raw_input - tgui_say = "" - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - def run_coorotine(observe_window): - async def get_result(observe_window): - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(observe_window[0]):]) - observe_window[0] = response - if (time.time() - observe_window[1]) > 5: - print('exit when no listener') - break - asyncio.run(get_result(observe_window)) - thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,)) - thread_listen.start() - return observe_window[0] diff --git a/spaces/falterWliame/Face_Mask_Detection/Crack Keygen !!TOP!! Structural Analysis For Revit 2014 Download.md b/spaces/falterWliame/Face_Mask_Detection/Crack Keygen !!TOP!! Structural Analysis For Revit 2014 Download.md deleted file mode 100644 index 564d684b6a5a8a9df46af9b224a596340166de53..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Crack Keygen !!TOP!! Structural Analysis For Revit 2014 Download.md +++ /dev/null @@ -1,136 +0,0 @@ -
                                                                                                                                                -

                                                                                                                                                How to Download and Activate Structural Analysis for Revit 2014 with Crack Keygen

                                                                                                                                                - -

Structural Analysis for Revit 2014 is software that enables you to perform structural analysis and design of building models created in Autodesk Revit. It allows you to optimize your structural design, improve collaboration, and reduce errors. However, if you want to use this software, you need to purchase a license from Autodesk, which can be expensive and time-consuming. Fortunately, there is a way to use Structural Analysis for Revit 2014 without paying for a license: by using a crack keygen.

                                                                                                                                                -

                                                                                                                                                crack Keygen Structural Analysis for Revit 2014 download


                                                                                                                                                Download Filehttps://urlca.com/2uDcWS



                                                                                                                                                - -

                                                                                                                                                Crack keygen is a software tool that can generate serial numbers or activation codes for various software products, including Structural Analysis for Revit 2014. By using crack keygen, you can bypass the official activation process and use the software for free. In this article, we will show you how to download and activate Structural Analysis for Revit 2014 with crack keygen.

                                                                                                                                                - -

                                                                                                                                                Step 1: Download Structural Analysis for Revit 2014 and Crack Keygen

                                                                                                                                                - -

                                                                                                                                                The first step is to download Structural Analysis for Revit 2014 and crack keygen from a reliable website. You can find many websites that offer these files, but be careful: some of them may contain viruses or malware that can harm your computer or steal your personal information. You should always scan any file you download from the internet with a reputable antivirus program before opening it.

                                                                                                                                                - -

                                                                                                                                                One of the websites that we recommend is Civil MDC, which provides various Autodesk products and crack keygens. You can visit their website and search for "Autodesk 2014 ALL-Products + X-Force (KeyGenerator)" or "X-force KeyGenerator. Autodesk Products. (2014) ALL". You will find links to download both Structural Analysis for Revit 2014 and crack keygen. The password to extract the files is www.civilmdc.com.

                                                                                                                                                - -

                                                                                                                                                Step 2: Install Structural Analysis for Revit 2014

                                                                                                                                                - -

                                                                                                                                                The next step is to install Structural Analysis for Revit 2014 on your computer. To do this, follow these instructions:

                                                                                                                                                -

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Extract the downloaded file and run the setup.exe file.
                                                                                                                                                2. -
                                                                                                                                                3. Follow the installation wizard and accept the terms and conditions.
                                                                                                                                                4. -
                                                                                                                                                5. Enter the serial number and product key that are provided in the readme.txt file.
                                                                                                                                                6. -
                                                                                                                                                7. Complete the installation and restart your computer.
                                                                                                                                                8. -
                                                                                                                                                9. Do not launch the software yet.
                                                                                                                                                10. -
                                                                                                                                                - -

                                                                                                                                                Step 3: Run Crack Keygen and Generate Activation Code

                                                                                                                                                - -

                                                                                                                                                The final step is to run crack keygen and generate an activation code for Structural Analysis for Revit 2014. To do this, follow these instructions:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Extract the downloaded file and run the x-force_2014_x32.exe or x-force_2014_x64.exe file as administrator, depending on your system.
                                                                                                                                                2. -
                                                                                                                                                3. Select Autodesk Structural Analysis for Revit 2014 from the list of products and click on Generate.
                                                                                                                                                4. -
                                                                                                                                                5. Copy the generated activation code and paste it into a text file.
                                                                                                                                                6. -
                                                                                                                                                - -

                                                                                                                                                Step 4: Activate Structural Analysis for Revit 2014 with Crack Keygen

                                                                                                                                                - -

                                                                                                                                                The last step is to activate Structural Analysis for Revit 2014 with crack keygen. To do this, follow these instructions:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Launch Structural Analysis for Revit 2014 on your computer.
                                                                                                                                                2. -
                                                                                                                                                3. Click on Activate and if it tells you that your serial number is wrong, simply click on Close and click on Activate again.
                                                                                                                                                4. -
                                                                                                                                                5. Select I have an activation code from Autodesk.
                                                                                                                                                6. -
                                                                                                                                                7. Copy the request code from the activation screen and paste it into the crack keygen.
                                                                                                                                                8. -
                                                                                                                                                9. Click on Patch (you should see Successfully patched).
                                                                                                                                                10. -
                                                                                                                                                11. Copy the activation code from the text file and paste it into the activation screen.
                                                                                                                                                12. -
                                                                                                                                                13. Click on Next. You should see a message that says your product has been successfully activated.
                                                                                                                                                14. -
                                                                                                                                                - -

                                                                                                                                                Congratulations! You have successfully downloaded and activated Structural Analysis for Revit 2014 with crack keygen!

                                                                                                                                                - -

                                                                                                                                                You can now use Structural Analysis for Revit 2014 without any limitations. However, we remind you that using crack keygen is illegal and unethical, and may cause legal, security, performance, or compatibility issues. We advise you to purchase a legitimate license from Autodesk and support their work by paying for their software products.

                                                                                                                                                -

                                                                                                                                                What are the Benefits of Using Structural Analysis for Revit 2014?

                                                                                                                                                - -

Structural Analysis for Revit 2014 is software that can help you improve your structural design and analysis workflow. Some of the benefits of using this software are:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                • It integrates seamlessly with Autodesk Revit, allowing you to perform structural analysis and design on the same model without exporting or importing data.
                                                                                                                                                • -
                                                                                                                                                • It supports various types of structural elements, such as beams, columns, walls, floors, foundations, braces, trusses, and more.
                                                                                                                                                • -
                                                                                                                                                • It provides various analysis methods, such as linear static, modal, response spectrum, pushover, and nonlinear static.
                                                                                                                                                • -
                                                                                                                                                • It allows you to visualize and explore the results of your analysis, such as displacements, forces, stresses, reactions, and more.
                                                                                                                                                • -
                                                                                                                                                • It enables you to create detailed reports and documentation of your analysis results and design decisions.
                                                                                                                                                • -
                                                                                                                                                - -

    What are the Alternatives to Using a Crack Keygen for Structural Analysis for Revit 2014?

    If you are not comfortable using a crack keygen for Structural Analysis for Revit 2014, or you want to avoid the risks associated with it, you may want to consider some alternatives:

    • Purchase a license from Autodesk: This is the legal and ethical way to use Structural Analysis for Revit 2014. You can choose from various subscription plans that suit your needs and budget, and you get the full features and benefits of the software, as well as updates, patches, and support from Autodesk.
    • Use a free trial version: Autodesk offers a free trial version of Structural Analysis for Revit 2014 that you can use for 30 days. This can be a good option if you want to test the software before buying it or if you only need it for a short-term project. However, you will not be able to save or print your work after the trial period expires.
    • Use alternative software: Other products can perform structural analysis and design of building models, such as SAP2000, ETABS, STAAD.Pro, and Robot Structural Analysis Professional. You can compare their features, prices, and reviews and choose the one that best suits your needs.
    

    How to Use Structural Analysis for Revit 2014 for Your Projects?

    Now that you have installed and activated Structural Analysis for Revit 2014, you may wonder how to use it for your projects. Here are some tips to help you get started:

    • Open Autodesk Revit and create or open a building model that you want to analyze.
    • Go to the Analyze tab and click on Structural Analysis for Revit. This will launch the software and connect it with your Revit model.
    • Select the structural elements that you want to include in your analysis. You can use filters, selection sets, or manual selection.
    • Define the loads and load combinations that apply to your model. You can use predefined or custom load types, such as dead, live, wind, seismic, etc.
    • Specify the analysis settings, such as analysis method, units, solver options, etc.
    • Run the analysis and wait for the results to be calculated.
    • Review and explore the results using various tools, such as tables, charts, diagrams, color maps, etc.
    • Create reports and documentation of your analysis results and design decisions. You can export them to PDF, Excel, Word, or other formats.
    

    How to Update or Uninstall Structural Analysis for Revit 2014?

    If you want to update or uninstall Structural Analysis for Revit 2014, follow these steps to avoid any problems:

    • To update Structural Analysis for Revit 2014, download the latest version from the Autodesk website and install it over the existing one.
    • To uninstall Structural Analysis for Revit 2014, go to Control Panel > Programs and Features and select Autodesk Structural Analysis for Revit 2014 from the list of programs. Click on Uninstall and follow the prompts. You may also need to delete any leftover files or folders from your computer.
    

                                                                                                                                                Conclusion

    

                                                                                                                                                In this article, we have shown you how to download and activate Structural Analysis for Revit 2014 with crack keygen. We have also given you some tips on how to use it for your projects, and how to update or uninstall it. We hope this article has been helpful and informative for you. However, we remind you that using crack keygen is illegal and unethical, and may cause legal, security, performance, or compatibility issues. We advise you to purchase a legitimate license from Autodesk and support their work by paying for their software products.

    

                                                                                                                                                How to Download and Install X-Force Keygen for Autodesk Products 2014?

                                                                                                                                                - -

                                                                                                                                                X-Force Keygen is a software tool that can generate serial numbers or activation codes for various Autodesk products, including Structural Analysis for Revit 2014. You can use X-Force Keygen to activate any Autodesk product without paying for a license. However, you need to download and install X-Force Keygen correctly to avoid any errors or issues. Here are the steps to download and install X-Force Keygen for Autodesk Products 2014:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Go to a trusted website that offers X-Force Keygen for Autodesk Products 2014. One of the websites that we recommend is Civil MDC, which provides various Autodesk products and crack keygens. You can visit their website and search for "X-force KeyGenerator. Autodesk Products. (2014) ALL". You will find a link to download X-Force Keygen for Autodesk Products 2014.
                                                                                                                                                2. -
                                                                                                                                                3. Download the file and extract it using a password. The password is www.civilmdc.com.
                                                                                                                                                4. -
                                                                                                                                                5. Run the x-force_2014_x32.exe or x-force_2014_x64.exe file as administrator, depending on your system.
                                                                                                                                                6. -
                                                                                                                                                7. You will see a window with a list of Autodesk products. Select the product that you want to activate and click on Generate.
                                                                                                                                                8. -
                                                                                                                                                9. You will get an activation code that you can use to activate your Autodesk product.
                                                                                                                                                10. -
                                                                                                                                                - -

                                                                                                                                                How to Use X-Force Keygen to Activate Structural Analysis for Revit 2014?

                                                                                                                                                - -

                                                                                                                                                Once you have downloaded and installed X-Force Keygen for Autodesk Products 2014, you can use it to activate Structural Analysis for Revit 2014. Here are the steps to use X-Force Keygen to activate Structural Analysis for Revit 2014:

                                                                                                                                                - -
                                                                                                                                                  -
                                                                                                                                                1. Launch Structural Analysis for Revit 2014 on your computer.
                                                                                                                                                2. -
                                                                                                                                                3. Click on Activate and if it tells you that your serial number is wrong, simply click on Close and click on Activate again.
                                                                                                                                                4. -
                                                                                                                                                5. Select I have an activation code from Autodesk.
                                                                                                                                                6. -
                                                                                                                                                7. Copy the request code from the activation screen and paste it into X-Force Keygen.
                                                                                                                                                8. -
                                                                                                                                                9. Click on Patch (you should see Successfully patched).
                                                                                                                                                10. -
                                                                                                                                                11. Copy the activation code from X-Force Keygen and paste it into the activation screen.
                                                                                                                                                12. -
                                                                                                                                                13. Click on Next. You should see a message that says your product has been successfully activated.
                                                                                                                                                14. -
                                                                                                                                                - -

                                                                                                                                                Congratulations! You have successfully activated Structural Analysis for Revit 2014 with X-Force Keygen!

                                                                                                                                                - -

                                                                                                                                                You can now use Structural Analysis for Revit 2014 without any limitations. However, we remind you that using X-Force Keygen is illegal and unethical, and may cause legal, security, performance, or compatibility issues. We advise you to purchase a legitimate license from Autodesk and support their work by paying for their software products.

    

                                                                                                                                                Conclusion

    

                                                                                                                                                In this article, we have shown you how to download and activate Structural Analysis for Revit 2014 with crack keygen and X-Force Keygen. We have also given you some tips on how to use it for your projects, how to troubleshoot it, how to update or uninstall it, and how to learn more about it. We hope this article has been helpful and informative for you. However, we remind you that using crack keygen or X-Force Keygen is illegal and unethical, and may cause legal, security, performance, or compatibility issues. We advise you to purchase a legitimate license from Autodesk and support their work by paying for their software products.

                                                                                                                                                3cee63e6c2
                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Car Parking Multiplayer A Fun and Challenging Game with MOD APK.md b/spaces/fatiXbelha/sd/Car Parking Multiplayer A Fun and Challenging Game with MOD APK.md deleted file mode 100644 index 3fa3dc6cbc0322915a7fa97a198014b2e60f3070..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Car Parking Multiplayer A Fun and Challenging Game with MOD APK.md +++ /dev/null @@ -1,97 +0,0 @@ - -

                                                                                                                                                What is APK Mod Car Parking Multiplayer?

    

                                                                                                                                                APK Mod Car Parking Multiplayer is a modified version of the popular driving simulation game Car Parking Multiplayer. The game allows you to experience realistic car parking and driving challenges in various environments and scenarios. You can also explore the open world, interact with other players, customize your cars, and enjoy different modes of gameplay.

    

                                                                                                                                                The APK mod version of the game gives you some advantages over the original version, such as unlimited money, unlocked cars, free shopping, and more. You can use these features to enhance your gameplay and have more fun.

    

                                                                                                                                                apk mod car parking multiplayer


                                                                                                                                                Download Zip ★★★★★ https://urllie.com/2uNuNB



    

                                                                                                                                                Why You Should Play APK Mod Car Parking Multiplayer

    

                                                                                                                                                There are many reasons why you should play APK Mod Car Parking Multiplayer. Here are some of them:

    

                                                                                                                                                Realistic and Fun Gameplay

    

                                                                                                                                                The game features realistic driving physics and car interiors that make you feel like you are driving a real car. You can choose from over 130 car models with different specifications and performance. You can also adjust the suspension, wheel angle, engine, turbo, gearbox, exhaust, and more to suit your driving style.

    

    

    

                                                                                                                                                The game also offers 82 real-life parking and driving challenges that test your skills and accuracy. You can drive different vehicles such as tow trucks, pickups, trucks, sports cars, and classic cars. You can also try different modes such as drag racing, chasing, police mode, taxi mode, delivery mode, and more.

    

                                                                                                                                                Open World and Multiplayer Mode

    

    The game lets you explore an open world with real gas stations and car services. You can walk around the city freely or drive your car to various locations. You can also compete against real players in the multiplayer racing mode or exchange cars with them. You can join thousands of players online every day and chat with them using voice chat or text chat.
    

    

                                                                                                                                                The game also has a friend list feature that allows you to add your friends and play with them. You can also create or join teams and cooperate with other players. You can also role-play as a police officer and catch or fine players for speeding or breaking the rules.

    

                                                                                                                                                Car Customization and Tuning

    

    The game gives you the opportunity to customize your car according to your preferences. You can change the color, vinyls, body parts, wheels, tires, lights, spoilers, bumpers, hoods, grills, mirrors, windows, plates, stickers, flags, horns, sirens, neons, smoke effects, and more. You can also tune your car's performance by swapping the engine, turbo, gearbox, exhaust, suspension, wheel angle, brakes, and more. You can also add accessories such as nitro, hydraulics, speakers, subwoofers, monitors, cameras, and more. You can also save your car designs and share them with other players.
    

    

                                                                                                                                                How to Download and Install APK Mod Car Parking Multiplayer

    

                                                                                                                                                If you want to play APK Mod Car Parking Multiplayer, you need to download and install it on your Android device. Here are the requirements and precautions you need to follow:

    

                                                                                                                                                Requirements and Precautions

    • You need an Android device running at least Android 4.4 with 1 GB of RAM.
    • You need to enable the installation of apps from unknown sources in your device settings.
    • You need to uninstall the original version of Car Parking Multiplayer if you have it on your device.
    • You need to have enough storage space on your device to download and install the APK mod file.
    • You need to be careful when downloading and installing the APK mod file from third-party sources, as they may contain viruses or malware that can harm your device or steal your data.
    • You need to be aware that playing the APK mod version of the game may violate the terms and conditions of the original game developer and may result in a ban or suspension of your account.
    

                                                                                                                                                Steps to Download and Install

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                1. Go to a reliable website that offers the APK mod file for Car Parking Multiplayer, such as [APKPure] or [APKDone].
                                                                                                                                                2. -
                                                                                                                                                3. Click on the download button and wait for the file to be downloaded on your device.
                                                                                                                                                4. -
                                                                                                                                                5. Locate the file in your device's file manager and tap on it to start the installation process.
                                                                                                                                                6. -
                                                                                                                                                7. Follow the instructions on the screen and grant the necessary permissions to the app.
                                                                                                                                                8. -
                                                                                                                                                9. Wait for the installation to be completed and launch the app from your home screen or app drawer.
                                                                                                                                                10. -
                                                                                                                                                11. Enjoy playing APK Mod Car Parking Multiplayer with unlimited money, unlocked cars, free shopping, and more.
                                                                                                                                                12. -
                                                                                                                                                -

                                                                                                                                                Tips and Tricks for APK Mod Car Parking Multiplayer

    

                                                                                                                                                If you want to master APK Mod Car Parking Multiplayer, you need to know some tips and tricks that can help you improve your gameplay and have more fun. Here are some of them:

    

                                                                                                                                                How to Select a Car and a Player

    

                                                                                                                                                The game allows you to select from over 130 car models and 16 player models. You can also customize your car and player according to your preferences. To select a car or a player, you need to do the following:

    • Tap on the garage icon on the top left corner of the screen.
    • Swipe left or right to browse through the available cars or players.
    • Tap on the car or player you want to select and confirm your choice.
    • Tap on the customize icon on the bottom right corner of the screen to change the appearance of your car or player.
    • Tap on the back arrow icon on the top left corner of the screen to return to the game.
    

                                                                                                                                                How to Drift, Donut, and Burnout

    

                                                                                                                                                The game allows you to perform various driving maneuvers such as drifting, donut, and burnout. These maneuvers can help you earn money, reputation, and fun. To perform these maneuvers, you need to do the following:

    • Drift: To drift, you need to accelerate your car and turn the steering wheel sharply while pressing the handbrake button. You can also use the clutch and the gear shift buttons to control your car's speed and direction. You can drift on corners, curves, or straight roads.
    • Donut: To donut, you need to accelerate your car and turn the steering wheel to one side while pressing the handbrake button. You can also use the clutch and the gear shift buttons to control your car's speed and direction. You can donut on flat surfaces or parking lots.
    • Burnout: To burnout, you need to accelerate your car and press the brake button at the same time. You can also use the clutch and the gear shift buttons to control your car's speed and direction. You can burnout on any surface or road.
    

                                                                                                                                                How to Make Money and Buy Businesses

    

                                                                                                                                                The game allows you to make money by completing parking and driving challenges, performing driving maneuvers, racing against other players, or working as a taxi driver, delivery driver, or police officer. You can also make money by selling or exchanging cars with other players.

    

                                                                                                                                                You can use the money you earn to buy businesses in the game. Businesses can help you generate passive income, unlock new cars and features, and increase your reputation. To buy businesses, you need to do the following:

    • Tap on the map icon on the top right corner of the screen.
    • Swipe left or right to browse through the available businesses.
    • Tap on the business you want to buy and confirm your purchase.
    • Tap on the back arrow icon on the top left corner of the screen to return to the game.
    

                                                                                                                                                Alternatives to APK Mod Car Parking Multiplayer

    

                                                                                                                                                If you are looking for other games that are similar to APK Mod Car Parking Multiplayer, you can try these alternatives:

    

                                                                                                                                                Real Car Parking 2: Driving School 2020

    

                                                                                                                                                This game is a realistic car parking simulation game that offers over 250 car models, 75 parking levels, 4 different camera angles, 3D graphics, realistic sounds, and more. You can also customize your car, join online multiplayer races, or learn how to drive in different weather conditions.

    

                                                                                                                                                Parking Frenzy 2.0: Drive&park

    

                                                                                                                                                This game is a fun and challenging car parking game that offers over 200 levels, 16 different cars, 4 awesome maps, 3D graphics, smooth controls, and more. You can also test your driving skills in various scenarios such as night parking, foggy parking, rainy parking, or snowy parking.

    

                                                                                                                                                Trailer Truck Parking with Real City Traffic Car Driving Sim

    

                                                                                                                                                This game is a realistic truck parking game that offers over 50 levels, 10 different trucks, realistic traffic system, 3D graphics, realistic sounds, and more. You can also drive your truck in different environments such as city, highway, desert, or mountain.

    

                                                                                                                                                Conclusion

    

                                                                                                                                                APK Mod Car Parking Multiplayer is a modified version of the popular driving simulation game Car Parking Multiplayer. The game allows you to experience realistic car parking and driving challenges in various environments and scenarios. You can also explore the open world, interact with other players, customize your cars, and enjoy different modes of gameplay.

    

                                                                                                                                                The APK mod version of the game gives you some advantages over the original version, such as unlimited money, unlocked cars, free shopping, and more. You can use these features to enhance your gameplay and have more fun.

    

                                                                                                                                                If you want to play APK Mod Car Parking Multiplayer, you need to download and install it on your Android device. You also need to follow some requirements and precautions before doing so. You also need to know some tips and tricks that can help you improve your gameplay and have more fun. You can also try some alternatives to APK Mod Car Parking Multiplayer if you are looking for other games that are similar to it.

    

                                                                                                                                                FAQs

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • Q: Is APK Mod Car Parking Multiplayer safe to download and install?
                                                                                                                                                • -
                                                                                                                                                • A: APK Mod Car Parking Multiplayer is generally safe to download and install from reliable sources such as [APKPure] or [APKDone]. However, you should always be careful when downloading and installing any APK mod file from third -party sources, as they may contain viruses or malware that can harm your device or steal your data. You should also be aware that playing the APK mod version of the game may violate the terms and conditions of the original game developer and may result in a ban or suspension of your account.
                                                                                                                                                • -
                                                                                                                                                • Q: How can I update APK Mod Car Parking Multiplayer?
                                                                                                                                                • -
                                                                                                                                                • A: APK Mod Car Parking Multiplayer is not available on the Google Play Store, so you cannot update it automatically. You need to manually download and install the latest version of the APK mod file from the same source you downloaded it from. You should also backup your game data before updating, as you may lose your progress or settings.
                                                                                                                                                • -
                                                                                                                                                • Q: How can I contact the developer of APK Mod Car Parking Multiplayer?
                                                                                                                                                • -
                                                                                                                                                • A: APK Mod Car Parking Multiplayer is not developed by the original game developer, but by a third-party modder. You can contact the modder through their website or social media accounts, if they have any. However, you should not expect any official support or updates from them.
                                                                                                                                                • -
                                                                                                                                                • Q: How can I play APK Mod Car Parking Multiplayer on PC?
                                                                                                                                                • -
                                                                                                                                                • A: APK Mod Car Parking Multiplayer is designed for Android devices, but you can play it on PC using an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. Some of the popular Android emulators are [BlueStacks], [NoxPlayer], and [MEmu]. You need to download and install an Android emulator on your PC, then download and install APK Mod Car Parking Multiplayer on the emulator.
                                                                                                                                                • -
• Q: How can I share my car designs with other players?
• A: The game allows you to save your car designs and share them with other players. To do this, you need to do the following:
  • Tap the garage icon in the top left corner of the screen.
  • Tap the save icon in the bottom right corner of the screen.
  • Enter a name for your car design and tap Save.
  • Tap the share icon in the bottom right corner of the screen.
  • Select the platform you want to share your car design on, such as Facebook, Instagram, WhatsApp, or email.
  • Follow the on-screen instructions to complete the sharing process.
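For the emulator route mentioned above, most desktop Android emulators such as BlueStacks, NoxPlayer, and MEmu also expose an adb (Android Debug Bridge) endpoint, so you can sideload the downloaded APK from the command line instead of dragging it into the emulator window. Below is a minimal, hypothetical Python sketch of that workflow: the helper name, the example APK filename, and the port number are illustrative assumptions (the adb port differs between emulators), and it assumes the Android SDK platform-tools (adb) are installed and on your PATH.

```python
# Hypothetical sketch: sideload an APK into a desktop Android emulator via adb.
# Assumes the Android SDK platform-tools ("adb") are installed and on PATH, and
# that the emulator exposes an adb TCP endpoint (the port varies by emulator).
import subprocess

def sideload_apk(apk_path: str, host: str = "127.0.0.1", port: int = 5555) -> None:
    """Connect to a locally running emulator over adb and install the given APK."""
    # Attach adb to the emulator's TCP endpoint.
    subprocess.run(["adb", "connect", f"{host}:{port}"], check=True)
    # List connected devices so you can confirm the emulator is visible to adb.
    subprocess.run(["adb", "devices"], check=True)
    # "-r" replaces the app if an older version is already installed, keeping its data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    # Example filename only; use the path of the APK you actually downloaded.
    sideload_apk("car_parking_multiplayer_mod.apk")
```

If the emulator does not show up under adb devices, check its settings for an option to enable adb or USB debugging, note the port it listens on, and pass that port to the helper.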

                                                                                                                                                \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download GTA 6 PPSSPP Zip File for Android - Free APK OBB ISO CSO.md b/spaces/fatiXbelha/sd/Download GTA 6 PPSSPP Zip File for Android - Free APK OBB ISO CSO.md deleted file mode 100644 index fa04747f621eab497099179b45e47fa843eb9453..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download GTA 6 PPSSPP Zip File for Android - Free APK OBB ISO CSO.md +++ /dev/null @@ -1,82 +0,0 @@ -
                                                                                                                                                -

                                                                                                                                                GTA 6: Everything You Need to Know

                                                                                                                                                -

                                                                                                                                                -

                                                                                                                                                ppsspp gta 6 zip file download android apk


                                                                                                                                                Download File ->>->>->> https://urllie.com/2uNBr7



                                                                                                                                                -

                                                                                                                                                In this article, I will try to answer some of the most common questions and queries about GTA 6, based on the information I have gathered from various sources. I will also provide you with some tips and tricks on how to enjoy GTA 6 on your Android device, if and when it becomes available. Please note that this article is not endorsed by Rockstar Games or any other official entity, and it is based on my own research and analysis. Therefore, some of the information may be inaccurate or outdated. Please use your own judgment and discretion when reading this article.

                                                                                                                                                -

                                                                                                                                                Here are the topics I will cover in this article:

                                                                                                                                                -
• GTA 6 release date and platforms
• GTA 6 trailer and gameplay
• GTA 6 setting and map
• GTA 6 characters and story
• GTA 6 features and innovations
• GTA 6 system requirements and download
• GTA 6 Android gameplay and review
• GTA 6 Android tips and tricks
• Conclusion
• FAQs

                                                                                                                                                GTA 6 release date and platforms

                                                                                                                                                -

                                                                                                                                                One of the most frequently asked questions about GTA 6 is when it will be released and on which platforms. Unfortunately, there is no definitive answer to this question yet, as Rockstar Games has not officially announced or confirmed anything about GTA 6's release date or platforms.

                                                                                                                                                -

However, based on some reports, leaks, and speculation, we can make some educated guesses. According to a Bloomberg report published back in June 2022, current and former Rockstar staff reckoned GTA 6 was still at least two years away from release, suggesting a 2024/2025 launch. Similarly, according to a leak by Tom Henderson, a reputable insider who has accurately predicted details about other games such as Call of Duty and Battlefield, GTA 6 is expected to be released in 2024 or later, due to factors such as the Covid-19 pandemic, employee well-being, next-gen console availability, and game quality.

                                                                                                                                                -

As for the platforms, it's safe to assume that GTA 6 will be available on PS5 and Xbox Series X|S at launch, as these are the latest generation of consoles and can handle the high-end graphics and performance that GTA 6 will likely require. However, older consoles such as PS4 and Xbox One are much less likely to support GTA 6, depending on when it comes out. In the past, Rockstar Games has released some of its games, such as GTA 5, on both the current-gen (at the time) and previous-gen consoles, but this may not be feasible for GTA 6 due to its expected size and scope.

GTA 6 Android gameplay and review

Another question that many GTA fans have is whether GTA 6 will be available on Android devices. The answer is not so simple, as there are many factors to consider. First of all, GTA 6 is not officially announced or confirmed for any platform yet, so we don't know for sure if Rockstar Games plans to release it on mobile devices at all. Secondly, even if GTA 6 does come to Android, it will likely take a long time after the console and PC release, as was the case with previous GTA games. For example, GTA 5 was released in 2013 for PS3 and Xbox 360, in 2014 for PS4 and Xbox One, in 2015 for PC, and in 2022 for PS5 and Xbox Series X|S. However, GTA 5 never came to Android or iOS devices, despite the high demand from fans. The closest thing we have is GTA Online, which can be played on Android devices using cloud gaming services like Google Stadia or NVIDIA GeForce Now.

                                                                                                                                                -

                                                                                                                                                On the other hand, some older GTA games did make their way to Android devices, such as GTA 3, GTA Vice City, GTA San Andreas, GTA Liberty City Stories, and GTA Chinatown Wars. These games were ported to Android with improved graphics, controls, and features, and received mostly positive reviews from critics and players. However, these games are much smaller and simpler than GTA 5 or GTA 6, so they are easier to adapt to mobile devices. Therefore, we can't expect the same level of quality and performance from GTA 6 on Android as we would from GTA 6 on consoles or PC.

                                                                                                                                                -

                                                                                                                                                That being said, there is still a possibility that GTA 6 will come to Android devices in some form or another. Maybe Rockstar Games will surprise us with a full-fledged port of GTA 6 on Android, or maybe they will release a spin-off game or a companion app that connects to the main game. Maybe they will use cloud gaming technology to stream GTA 6 to Android devices without compromising the graphics or gameplay. Or maybe they will do something completely different that we can't even imagine right now. The point is, we don't know what Rockstar Games has in store for us with GTA 6 on Android, but we can hope that they will deliver something amazing and satisfying for the millions of fans who want to play GTA 6 on their mobile devices.

                                                                                                                                                -

                                                                                                                                                Until then, we can only rely on the leaked footage and screenshots of GTA 6 that surfaced online in September 2022. These leaks showed us some glimpses of the gameplay and graphics of GTA 6 on PS5 and Xbox Series X|S, and they looked stunning. The leaks revealed that GTA 6 will take place in a modern-day version of Vice City (Miami), with two main protagonists: a male named Jason and a female named Lucia. The leaks also showed some of the features and innovations that GTA 6 will bring to the series, such as dynamic weather effects, realistic traffic behavior, interactive environments, improved combat mechanics, and more.

                                                                                                                                                -


                                                                                                                                                -

                                                                                                                                                Based on these leaks, we can say that GTA 6 will be a huge leap forward for the Grand Theft Auto series, and it will offer an immersive and thrilling experience for the players. However, these leaks are not official or confirmed by Rockstar Games, so we have to take them with a grain of salt. They may not represent the final version of the game, or they may be fake or altered. Therefore, we have to wait for Rockstar Games to officially announce and reveal GTA 6 before we can judge it properly.

                                                                                                                                                -

                                                                                                                                                In conclusion, GTA 6 is one of the most anticipated games of all time, and it will likely be a masterpiece of gaming when it comes out. However, we don't know if it will come out on Android devices or not, and if it does, how it will look and play on them. We can only hope that Rockstar Games will surprise us with something amazing for Android users who love GTA games.

GTA 6 Android tips and tricks

                                                                                                                                                -

                                                                                                                                                If you are lucky enough to play GTA 6 on your Android device, either through a port, a spin-off, a companion app, or a cloud gaming service, you may want to know some tips and tricks to make the most of your experience. Here are some of the best ones I have found:

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • Optimize your device settings. Before you launch GTA 6 on your Android device, make sure you have enough battery, storage, and RAM available. You may also want to turn off any unnecessary apps or features that may drain your resources or interfere with your gameplay. For example, you can turn on airplane mode, disable notifications, lower your screen brightness, and close any background apps. You can also use a game booster app to optimize your device's performance and enhance your gaming experience.
                                                                                                                                                • -
                                                                                                                                                • Adjust your game settings. Once you start GTA 6 on your Android device, you may want to tweak some of the game settings to suit your preferences and needs. For example, you can change the graphics quality, the sound volume, the control layout, the camera angle, and the difficulty level. You can also enable or disable some of the game features, such as subtitles, auto-aim, vibration, and online mode. You can access the game settings by pausing the game and tapping on the Settings icon.
                                                                                                                                                • -
                                                                                                                                                • Use cheats and mods. If you want to spice up your GTA 6 gameplay on your Android device, you can use some of the cheats and mods that are available online. Cheats are codes that you can enter in the game to activate various effects, such as invincibility, unlimited money, weapons, vehicles, and more. Mods are modifications that you can install in the game to change or add new content, such as skins, maps, missions, characters, and more. However, be careful when using cheats and mods, as they may cause glitches, crashes, or bans from the game. Also, make sure you download them from trusted sources and scan them for viruses or malware.
                                                                                                                                                • -
                                                                                                                                                • Explore the map. One of the best things about GTA 6 is its huge and detailed map that covers a modern-day version of Vice City (Miami) and its surroundings. The map is full of places to visit, activities to do, secrets to discover, and people to interact with. You can explore the map by driving, flying, swimming, walking, or using public transportation. You can also use the map menu to see your current location, your objectives, your waypoints, and other points of interest. You can zoom in or out of the map by pinching the screen.
                                                                                                                                                • -
                                                                                                                                                • Complete the missions. The main way to progress in GTA 6 is by completing the missions that are given to you by various characters in the game. The missions are divided into main missions and side missions. Main missions advance the main story of the game and involve the two protagonists: Jason and Lucia. Side missions are optional and involve other characters or activities in the game. You can choose which missions to accept or decline by using your phone in the game. You can also replay any mission that you have completed by using the mission menu.
                                                                                                                                                • -
                                                                                                                                                -

These are just some of the tips and tricks you can use if and when you get to play GTA 6 on an Android device. There are many more that you can find online or by experimenting with the game yourself. I hope you enjoy playing GTA 6 on your Android device when the time comes.

                                                                                                                                                Conclusion

                                                                                                                                                -

                                                                                                                                                In this article, I have tried to answer some of the most common questions and queries about GTA 6, the upcoming game from Rockstar Games. I have also provided you with some tips and tricks on how to enjoy GTA 6 on your Android device, if and when it becomes available. I hope you have found this article helpful and informative, and I hope you are as excited as I am for GTA 6.

                                                                                                                                                -

                                                                                                                                                GTA 6 is one of the most anticipated games of all time, and it will likely be a masterpiece of gaming when it comes out. However, we don't know much about it yet, as Rockstar Games has not officially announced or confirmed anything about it. Therefore, we have to be patient and wait for more news and updates from Rockstar Games. Until then, we can enjoy the previous GTA games on our Android devices, or play other games that are similar to GTA, such as Gangstar Vegas, Payback 2, or Grand Theft Auto: San Andreas.

                                                                                                                                                -

Thank you for reading this article, and please share it with your friends who are also interested in GTA 6. If you have any questions, comments, or feedback about this article or GTA 6 in general, please feel free to leave them below. I will try to answer them as soon as possible. Also, if you want to read more articles like this one, please follow me and check out my other articles. I write about various topics related to gaming, technology, entertainment, and more.

                                                                                                                                                -

                                                                                                                                                Have a great day, and happy gaming!

                                                                                                                                                -

                                                                                                                                                FAQs

                                                                                                                                                -

                                                                                                                                                Here are some of the frequently asked questions about GTA 6 that I have collected from various sources:

                                                                                                                                                -
1. Is GTA 6 confirmed?
No, GTA 6 is not confirmed by Rockstar Games yet. However, there are many rumors and leaks suggesting that GTA 6 is in development and will be released in the future.

2. When will GTA 6 be released?
We don't know for sure when GTA 6 will be released, as Rockstar Games has not announced or confirmed anything about it. However, based on reports, leaks, and speculation, we can expect GTA 6 to be released in 2024 or later.

3. Where will GTA 6 take place?
We don't know for sure where GTA 6 will take place, as Rockstar Games has not announced or confirmed anything about it. However, based on leaks and rumors, we can expect GTA 6 to be set in a modern-day version of Vice City (Miami) and its surroundings.

4. Who will be the main characters of GTA 6?
We don't know for sure who the main characters of GTA 6 will be, as Rockstar Games has not announced or confirmed anything about it. However, based on leaks and rumors, we can expect GTA 6 to have two main protagonists: a male named Jason and a female named Lucia.

5. How can I play GTA 6 on my Android device?
We don't know for sure whether GTA 6 will be available on Android devices, as Rockstar Games has not announced or confirmed anything about it. However, if GTA 6 does come to Android in some form, you may be able to play it through a port, a spin-off, a companion app, or a cloud gaming service.

                                                                                                                                                \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Royal Battletown Mod APK with Happymod - The Best Way to Enjoy a Stylized City.md b/spaces/fatiXbelha/sd/Download Royal Battletown Mod APK with Happymod - The Best Way to Enjoy a Stylized City.md deleted file mode 100644 index 30ddb15ddffcb9297cbb09c3d446ce2375a94555..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Royal Battletown Mod APK with Happymod - The Best Way to Enjoy a Stylized City.md +++ /dev/null @@ -1,135 +0,0 @@ -
                                                                                                                                                -

                                                                                                                                                Royal Battletown Mod APK Download Happymod: A Fun and Exciting Action Game

                                                                                                                                                -

                                                                                                                                                If you are looking for a free action game that offers you daily fun and adventure in a large open world with destructible objects and cartoon graphics, then you should try Royal Battletown. This game has everything you need to have fun, from riding around the city on a skateboard or bicycle, to jumping from helicopters and planes, to fighting zombies and enemies. You can do anything in this game, as it has a big world with interesting game mechanics.

                                                                                                                                                -

                                                                                                                                                royal battletown mod apk download happymod


                                                                                                                                                DOWNLOAD ✦✦✦ https://urllie.com/2uNEVX



                                                                                                                                                -

                                                                                                                                                But what if you want to enjoy the game even more, without any limitations or interruptions? Well, then you should download the Royal Battletown Mod APK from Happymod. This modded version of the game will give you unlimited money and resources, unlock all the items and abilities, and remove all the ads and in-app purchases. You will be able to play the game as you wish, without any restrictions or hassles.

                                                                                                                                                -

                                                                                                                                                In this article, we will tell you more about Royal Battletown, its features, its mod APK, how to download and install it, some tips and tricks for playing it, and a review of the game. Read on to find out more.

                                                                                                                                                -

                                                                                                                                                What is Royal Battletown?

                                                                                                                                                -

Royal Battletown is an action game developed by Naxeex LLC, a studio that specializes in creating open-world games with RPG elements. The game was released in 2022 for Android devices and has since gained over 10 million downloads on the Google Play Store. It has a rating of 3.8 out of 5 stars, based on over 35,000 reviews.

                                                                                                                                                -

                                                                                                                                                Features of Royal Battletown

                                                                                                                                                -

                                                                                                                                                Royal Battletown has many features that make it a fun and exciting game to play. Here are some of them:

                                                                                                                                                -

                                                                                                                                                -

                                                                                                                                                Open world with destructible objects and cartoon graphics

                                                                                                                                                -

                                                                                                                                                The game takes place in a bright stylized city that keeps a lot of secrets. You can explore the city and find free money, ammo, first-aid kits, weapons, and other useful things. You can also destroy objects like cars, buildings, fences, etc., with your weapons or abilities. The game has modern graphics and good optimization, so it will work even on weak devices.

                                                                                                                                                -

                                                                                                                                                Various quests, mini games, and battles

                                                                                                                                                -


                                                                                                                                                The game has many quests and missions that you can complete to earn money and reputation. You can also play mini games like racing, shooting, parkour, etc., to have fun and test your skills. You can also fight against other players or NPCs in different modes, such as deathmatch, team deathmatch, capture the flag, etc. You can use various weapons and vehicles to gain an advantage in battles.

                                                                                                                                                -

                                                                                                                                                Special abilities and rewards

                                                                                                                                                -

                                                                                                                                                The game allows you to use special abilities that can help you in different situations. You can use super speed, super jump, invisibility, telekinesis, etc., to escape from enemies, reach high places, or move objects. You can also use magic spells like fireballs, lightning bolts, ice shards, etc., to attack your foes. You can unlock new abilities and spells by completing quests and achievements. You can also get rewards like money, weapons, vehicles, clothes, etc., by playing the game regularly.

                                                                                                                                                -

                                                                                                                                                Customization and arsenal

                                                                                                                                                -

                                                                                                                                                The game lets you customize your character and your arsenal according to your preferences. You can change your appearance by choosing different clothes, hairstyles, tattoos, etc. You can also upgrade your weapons and vehicles by adding attachments, skins, stickers, etc. You can choose from a wide range of weapons and vehicles, such as pistols, rifles, shotguns, grenades, rockets, bikes, cars, tanks, helicopters, planes, etc.

                                                                                                                                                -

                                                                                                                                                What is Royal Battletown Mod APK?

                                                                                                                                                -

                                                                                                                                                Royal Battletown Mod APK is a modified version of the original game that gives you some extra features and benefits that are not available in the official version. The mod APK is created by third-party developers who modify the game files to enhance the gameplay experience.

                                                                                                                                                -

                                                                                                                                                Benefits of Royal Battletown Mod APK

                                                                                                                                                -

                                                                                                                                                Royal Battletown Mod APK has many benefits that make it a better choice than the original game. Here are some of them:

                                                                                                                                                -

                                                                                                                                                Unlimited money and resources

                                                                                                                                                -

                                                                                                                                                The mod APK gives you unlimited money and resources that you can use to buy anything you want in the game. You don't have to worry about running out of money or resources while playing the game. You can buy any weapon, vehicle, item, ability, or spell you want without any limitations.

                                                                                                                                                -

                                                                                                                                                All items and abilities unlocked

                                                                                                                                                -

                                                                                                                                                The mod APK unlocks all the items and abilities that are otherwise locked or require a certain level or achievement to unlock in the original game. You don't have to wait or work hard to unlock them in the mod APK. You can access any item or ability you want from the start of the game.

                                                                                                                                                -

                                                                                                                                                No ads and in-app purchases

                                                                                                                                                -

                                                                                                                                                The mod APK removes all the ads and in-app purchases that are present in the original game. You don't have to watch annoying ads or spend real money to buy anything in the game. You can enjoy the game without any interruptions or distractions.

                                                                                                                                                -

                                                                                                                                                How to download and install Royal Battletown Mod APK?

                                                                                                                                                -

If you want to download and install Royal Battletown Mod APK on your Android device, you need to follow a few simple steps. Here they are:

                                                                                                                                                -

                                                                                                                                                Steps to download and install Royal Battletown Mod APK

                                                                                                                                                -

                                                                                                                                                Enable unknown sources

                                                                                                                                                -

Before you install the mod APK file, you need to allow installs from unknown sources on your device, which lets you install apps from outside the Google Play Store. On Android 7.1 and older, go to Settings > Security > Unknown Sources and toggle it on. On Android 8.0 and newer, the setting is per app: the first time you open an APK, Android asks you to allow installs from whichever app opened it (for example, your browser or file manager).
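For readers curious about what happens behind that toggle, here is a minimal Kotlin sketch of the Android 8.0+ mechanism. The helper function name is made up for illustration and you do not need to run anything like this to install the mod APK, but the framework calls (canRequestPackageInstalls and ACTION_MANAGE_UNKNOWN_APP_SOURCES) are the real APIs an app uses before handing an APK to the system installer.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Hypothetical helper: on Android 8.0+ (API 26+) the old global "Unknown sources"
// switch is replaced by a per-app "Install unknown apps" permission. The calling
// app must also declare android.permission.REQUEST_INSTALL_PACKAGES in its manifest.
fun requestInstallPermissionIfNeeded(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        !activity.packageManager.canRequestPackageInstalls()
    ) {
        // Opens Settings > Apps > Special app access > Install unknown apps
        // for this specific app (identified by the package: URI).
        val intent = Intent(
            Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
            Uri.parse("package:${activity.packageName}")
        )
        activity.startActivity(intent)
    }
}
```

In everyday use you never trigger this yourself: the browser or file manager that opens the APK shows the prompt for you, and you simply tap Allow and then Install.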

                                                                                                                                                -

                                                                                                                                                Download the mod APK file from a trusted source

                                                                                                                                                -

                                                                                                                                                Next, you need to download the mod APK file from a trusted source. There are many websites that offer mod APK files for various games and apps, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you need to be careful while choosing a source for downloading the mod APK file.

                                                                                                                                                -

                                                                                                                                                One of the best sources for downloading Royal Battletown Mod APK is Happymod. Happymod is a popular platform that provides modded versions of various games and apps for free. All the mod APK files on Happymod are tested and verified by users and moderators before they are uploaded on the website. You can download Royal Battletown Mod APK from Happymod by following this link: [text].

                                                                                                                                                -

                                                                                                                                                Install the mod APK file

                                                                                                                                                -


                                                                                                                                                After you download the mod APK file from Happymod, you need to install it on your device. To do so, locate the downloaded file in your file manager and tap on it. A pop-up will appear asking you to confirm the installation. Tap on Install and wait for the installation process to finish. You may need to grant some permissions to the app during the installation.

                                                                                                                                                -

                                                                                                                                                Enjoy the game

                                                                                                                                                -

                                                                                                                                                Once the installation is done, you can launch the game from your app drawer or home screen. You will see a mod menu on the screen that will allow you to enable or disable various features of the mod APK. You can also access the settings and customize the game according to your preferences. You can now enjoy playing Royal Battletown with unlimited money, resources, items, abilities, and no ads.

                                                                                                                                                -

                                                                                                                                                Tips and tricks for playing Royal Battletown

                                                                                                                                                -

                                                                                                                                                Royal Battletown is a fun and exciting game that can keep you entertained for hours. However, if you want to play the game more effectively and have more fun, you can follow some tips and tricks that we have gathered for you. Here are some of them:

                                                                                                                                                -

                                                                                                                                                How to play Royal Battletown effectively

                                                                                                                                                -

                                                                                                                                                Explore the city and collect useful items

                                                                                                                                                -

                                                                                                                                                The city of Royal Battletown is full of surprises and secrets. You can find many useful items scattered around the city, such as money, ammo, first-aid kits, weapons, vehicles, etc. You can also find hidden locations and easter eggs that can give you extra rewards or fun experiences. You should explore the city as much as possible and collect everything you can find. This will help you in your quests and battles, as well as make the game more enjoyable.

                                                                                                                                                -

                                                                                                                                                Complete quests and achievements

                                                                                                                                                -

                                                                                                                                                The game has many quests and achievements that you can complete to earn money and reputation. You can find quests from different characters or locations in the city, such as police stations, bars, casinos, etc. You can also find quests from random events that happen in the city, such as robberies, car chases, gang wars, etc. You should complete as many quests as you can, as they will give you rewards and unlock new features and items in the game.

                                                                                                                                                -

                                                                                                                                                The game also has many achievements that you can unlock by performing certain actions or tasks in the game. For example, you can unlock achievements by killing a certain number of enemies, using a certain weapon or vehicle, destroying a certain object, etc. You should try to unlock as many achievements as you can, as they will give you bonuses and bragging rights.

                                                                                                                                                -

                                                                                                                                                Upgrade your character and abilities

                                                                                                                                                -

                                                                                                                                                The game allows you to upgrade your character and abilities by spending money and resources. You can upgrade your character's attributes, such as health, stamina, strength, speed, etc., by buying clothes, accessories, tattoos, etc. You can also upgrade your abilities by buying new spells or enhancing existing ones. You should upgrade your character and abilities as much as possible, as they will make you stronger and more powerful in the game.

                                                                                                                                                -

                                                                                                                                                Use different weapons and vehicles

                                                                                                                                                -


                                                                                                                                                The game offers you a wide range of weapons and vehicles that you can use in different situations. You can choose from pistols, rifles, shotguns, grenades, rockets, bikes, cars, tanks, helicopters, planes, etc. You can also find unique and rare weapons and vehicles in the game, such as laser guns, jetpacks, UFOs, etc. You should use different weapons and vehicles depending on the scenario and your preference. You should also experiment with different combinations and see what works best for you.

                                                                                                                                                -

                                                                                                                                                Review of Royal Battletown

                                                                                                                                                -

                                                                                                                                                Royal Battletown is a game that has a lot of potential and appeal for fans of action and open world games. However, it also has some flaws and drawbacks that may affect the overall enjoyment of the game. Here is a review of the game based on its pros and cons, as well as user ratings and feedback.

                                                                                                                                                -

                                                                                                                                                Pros and cons of Royal Battletown

                                                                                                                                                -

                                                                                                                                                Royal Battletown has many pros and cons that make it a mixed bag of a game. Here are some of them:

                                                                                                                                                -

                                                                                                                                                Pros: fun, addictive, colorful, diverse

                                                                                                                                                -

                                                                                                                                                The game is fun and addictive to play, as it offers you a lot of freedom and variety in what you can do in the game. You can have fun exploring the city, completing quests, playing mini games, fighting enemies, using abilities, etc. The game is also colorful and vibrant, with cartoon graphics and humorous elements. The game is also diverse and rich in content, with many weapons, vehicles, items, abilities, spells, characters, locations, etc.

                                                                                                                                                -

                                                                                                                                                Cons: buggy, repetitive, unrealistic

                                                                                                                                                -

                                                                                                                                                The game is buggy and glitchy, as it has many errors and problems that can ruin the gameplay experience. For example, the game may crash or freeze randomly, the graphics may be distorted or pixelated, the controls may be unresponsive or inaccurate, the physics may be inconsistent or unrealistic, etc. The game is also repetitive and boring after a while, as it has many similar quests and missions that lack originality and creativity. The game is also unrealistic and absurd in many aspects, such as the damage system, the enemy AI, the dialogue, etc.

                                                                                                                                                -

                                                                                                                                                User ratings and feedback

                                                                                                                                                -


                                                                                                                                                The game has an average rating of 3.8 out of 5 stars on Google Play Store, based on over 35 thousand reviews. The game has received mixed feedback from the users, who have praised and criticized different aspects of the game. Here are some of the user reviews:

                                                                                                                                                -

                                                                                                                                                Positive reviews: praise the graphics, gameplay, and humor

                                                                                                                                                -

                                                                                                                                                Some of the positive reviews are:

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • "This game is awesome. The graphics are amazing and the gameplay is smooth and fun. I love the humor and the references in the game. It's like GTA but with more freedom and craziness. I recommend this game to anyone who likes action and open world games."
                                                                                                                                                • -
                                                                                                                                                • "I really enjoy this game. It has a lot of features and options that make it interesting and entertaining. The graphics are colorful and cartoonish, which I like. The gameplay is addictive and challenging. The humor is hilarious and witty. It's a great game to play when you are bored or stressed."
                                                                                                                                                • -
                                                                                                                                                • "This game is very fun and exciting. It has a lot of things to do and explore in the city. The graphics are nice and the sound effects are good. The gameplay is fast and dynamic. The humor is funny and clever. It's a game that makes you laugh and have fun."
                                                                                                                                                • -
                                                                                                                                                -

                                                                                                                                                Negative reviews: complain about the glitches, ads, and controls

                                                                                                                                                -

                                                                                                                                                Some of the negative reviews are:

                                                                                                                                                -
                                                                                                                                                  -
                                                                                                                                                • "This game is terrible. The graphics are awful and the gameplay is buggy and glitchy. I hate the ads and the in-app purchases that ruin the game. The controls are hard and unresponsive. The game is unrealistic and stupid. It's a waste of time and space."
                                                                                                                                                • -
                                                                                                                                                • "I don't like this game. The graphics are ugly and the gameplay is boring and repetitive. I can't stand the ads and the in-app purchases that pop up every time I play. The controls are confusing and inaccurate. The game is too easy and too hard at the same time. It's a bad game."
                                                                                                                                                • -
                                                                                                                                                • "This game is disappointing. The graphics are mediocre and the gameplay is laggy and unstable. I get annoyed by the ads and the in-app purchases that force me to spend money on the game. The controls are awkward and frustrating. The game is unrealistic and ridiculous. It's not fun at all."
                                                                                                                                                • -
                                                                                                                                                -

                                                                                                                                                Conclusion

                                                                                                                                                -

                                                                                                                                                Royal Battletown is a game that can be enjoyed by anyone who likes action and open world games with cartoon graphics and humorous elements. The game offers you a lot of freedom and variety in what you can do in the game, from exploring the city, completing quests, playing mini games, fighting enemies, using abilities, etc.

                                                                                                                                                -

                                                                                                                                                However, the game also has some flaws and drawbacks that may affect your enjoyment of the game, such as bugs, glitches, ads, in-app purchases, controls, etc.

                                                                                                                                                -

                                                                                                                                                If you want to play the game without any limitations or interruptions, you can download the Royal Battletown Mod APK from Happymod, which will give you unlimited money, resources, items, abilities, etc., as well as remove all the ads and in-app purchases from the game.

                                                                                                                                                -

                                                                                                                                                Whether you play the original or modded version of Royal Battletown, we hope that you have fun playing it.

                                                                                                                                                -

                                                                                                                                                FAQs

                                                                                                                                                -

                                                                                                                                                Here are some frequently asked questions about Royal Battletown:

                                                                                                                                                -

                                                                                                                                                Q: Is Royal Battletown safe to play?

                                                                                                                                                -

                                                                                                                                                A: Royal Battletown is safe to play as long as you download it from a trusted source like Google Play Store or Happymod.

                                                                                                                                                -

                                                                                                                                                Q: Is Royal Battletown online or offline?

                                                                                                                                                -

                                                                                                                                                A: Royal Battletown can be played both online and offline.

                                                                                                                                                -

                                                                                                                                                Q: How can I get more money in Royal Battletown?

                                                                                                                                                -

                                                                                                                                                A: You can get more money in Royal Battletown by completing quests, achievements, mini games, battles, etc., or by downloading the mod APK from Happymod.

                                                                                                                                                -

                                                                                                                                                Q: How can I change my character's appearance in Royal Battletown?

                                                                                                                                                -

                                                                                                                                                A: You can change your character's appearance in Royal Battletown by buying clothes, accessories, tattoos, etc., from different shops in the city.

                                                                                                                                                -

                                                                                                                                                Q: How can I use magic spells in Royal Battletown?

                                                                                                                                                -

                                                                                                                                                A: You can use magic spells in Royal Battletown by buying them from different vendors in the city or by unlocking them by completing quests or achievements.

                                                                                                                                                -
                                                                                                                                                -
                                                                                                                                                \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy the Stunning 3D Graphics and Sound of Air Attack HD APK on Android.md b/spaces/fatiXbelha/sd/Enjoy the Stunning 3D Graphics and Sound of Air Attack HD APK on Android.md deleted file mode 100644 index 3d5b51264b6b5bea6d61889a1f45761e6b5919d7..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy the Stunning 3D Graphics and Sound of Air Attack HD APK on Android.md +++ /dev/null @@ -1,103 +0,0 @@ -
                                                                                                                                                -

                                                                                                                                                Air Attack Old Version APK: A Classic WW2 Shooter Game

                                                                                                                                                -

                                                                                                                                                If you are a fan of WW2 air combat games, you might want to check out Air Attack Old Version APK. This is a retro-style shooter game that lets you fly various planes and fight against hordes of enemies in 10 exciting missions. You can also upgrade your weapons and special abilities, and face huge bosses at the end of each level. In this article, we will tell you what Air Attack Old Version APK is, how to download and install it, why you should play it, and some tips and tricks to help you master it. We will also list the pros and cons of this game, and answer some frequently asked questions.

                                                                                                                                                -

                                                                                                                                                What is Air Attack Old Version APK?

                                                                                                                                                -

                                                                                                                                                Air Attack Old Version APK is an Android game that was released in 2010 by Art In Games. It is a 3D scrolling shooter game that simulates WW2 air combat scenarios. You can choose from three different planes, each with its own characteristics and abilities. You can also customize your plane with various weapons and special items, such as bombs, rockets, lasers, shields, and more. The game has 10 missions, each with a different setting and objective. You will face 64 types of enemies, ranging from fighter jets, tanks, ships, submarines, to giant robots and dragons. The game also has impressive graphics, sound effects, and music that create an immersive atmosphere.

                                                                                                                                                -

                                                                                                                                                air attack old version apk


                                                                                                                                                DOWNLOAD 🌟 https://urllie.com/2uNIAk



                                                                                                                                                -

                                                                                                                                                How to download and install Air Attack Old Version APK?

                                                                                                                                                -

                                                                                                                                                Air Attack Old Version APK is not available on the Google Play Store, so you will need to download it from a third-party source. One of the websites that offer this game is APKCombo. Here are the steps to download and install Air Attack Old Version APK:

                                                                                                                                                -
                                                                                                                                                  -
1. Go to [APKCombo](^1^) and search for "AirAttack HD".
2. Select the version that you want to download. The latest version is 1.8.1.
3. Click on the "Download APK" button and wait for the file to be downloaded.
4. Once the file is downloaded, open it and tap on "Install". You may need to enable "Unknown sources" in your device settings to allow the installation.
5. After the installation is complete, you can launch the game and enjoy it.
                                                                                                                                                -

                                                                                                                                                Why play Air Attack Old Version APK?

                                                                                                                                                -

                                                                                                                                                Air Attack Old Version APK is a fun and addictive game that will appeal to anyone who likes shooting games, especially those who are nostalgic for the classic arcade games of the past. Here are some reasons why you should play Air Attack Old Version APK:

                                                                                                                                                -
                                                                                                                                                  -
• It has stunning 3D graphics that make the game look realistic and detailed.
• It has smooth and responsive gameplay that makes the controls easy and intuitive.
• It has varied and challenging missions that keep you engaged and entertained.
• It has a lot of weapons and special items that you can use to enhance your plane and your performance.
• It has huge bosses that require strategy and skill to defeat.
                                                                                                                                                -

                                                                                                                                                Tips and tricks for playing Air Attack Old Version APK

                                                                                                                                                -

Air Attack Old Version APK is not an easy game, so a few pointers will help: upgrade your weapons and special items between missions, save shields and bombs for the densest waves of tanks, ships, submarines, giant robots and dragons, and learn each boss's attack pattern, because the huge end-of-level bosses require strategy and skill to defeat.

                                                                                                                                                -

Pros and cons of Air Attack Old Version APK

Pros

The strengths are the ones covered above: stunning 3D graphics, smooth and responsive controls, varied and challenging missions, a large arsenal of weapons and special items, and huge bosses that are satisfying to take down.

Cons

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • Requires Android 2.0.1 or higher

                                                                                                                                                -

                                                                                                                                                The game requires Android 2.0.1 or higher to run, which means that some older devices may not be able to play it. You should check the compatibility of your device before downloading the game.

                                                                                                                                              • May not be compatible with some devices

                                                                                                                                                -

                                                                                                                                                The game may not be compatible with some devices, especially those with low memory or low resolution. You may experience crashes, glitches, or poor performance if your device is not supported by the game. You should read the reviews and feedback of other users before downloading the game.

                                                                                                                                              • No online multiplayer mode

                                                                                                                                                -

                                                                                                                                                The game does not have an online multiplayer mode, which means that you cannot play with or against other players. You can only play the game solo or with a local co-op partner. You may find the game less fun or challenging if you are looking for a social or competitive experience.

                                                                                                                                              -

                                                                                                                                              Conclusion

                                                                                                                                              -

                                                                                                                                              Air Attack Old Version APK is a classic WW2 shooter game that will give you a thrilling and enjoyable gaming experience. You can fly various planes, customize your weapons and abilities, fight against 64 types of enemies, and face huge bosses in 10 exciting missions. The game also has stunning 3D graphics, sound effects, and music that create an immersive atmosphere. However, the game also has some drawbacks, such as requiring Android 2.0.1 or higher, being incompatible with some devices, and lacking an online multiplayer mode. You should weigh the pros and cons of the game before downloading it.

                                                                                                                                              -

                                                                                                                                              FAQs

                                                                                                                                              -

                                                                                                                                              Here are some frequently asked questions about Air Attack Old Version APK:

                                                                                                                                              -
                                                                                                                                                -
1. Is Air Attack Old Version APK safe to download?

Yes, Air Attack Old Version APK is safe to download from a reputable source, such as APKCombo. However, you should always scan the file with an antivirus program before installing it, and be careful of any permissions or pop-ups that the game may ask for.

2. How much space does Air Attack Old Version APK take up?

Air Attack Old Version APK takes up about 25 MB of space on your device. However, you may need more space for additional data or updates that the game may require.

3. Can I play Air Attack Old Version APK offline?

Yes, you can play Air Attack Old Version APK offline without an internet connection. However, you may need an internet connection to download the game or access some features, such as leaderboards or achievements.

4. Can I play Air Attack Old Version APK with a controller?

Yes, you can play Air Attack Old Version APK with a controller if your device supports it. You can also use the tilt or touch controls to play the game.

5. Can I play Air Attack Old Version APK on PC?

No, you cannot play Air Attack Old Version APK on PC directly. However, you can use an Android emulator, such as BlueStacks or NoxPlayer, to run the game on your PC. You will need to download and install the emulator and the game on your PC, and then launch the game from the emulator.

                                                                                                                                                -
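For readers comfortable with a command line, the sketch below shows one way to sideload the downloaded APK into an already-running Android emulator through adb. This is only an illustration: the file name is a placeholder, and it assumes the adb tool is installed and on your PATH.

```python
import subprocess

# Placeholder name for the APK you downloaded; adjust to your actual file.
APK_PATH = "air_attack_old_version.apk"


def install_apk(apk_path: str) -> None:
    """Sideload an APK into a running Android emulator via adb."""
    # Show connected devices/emulators so you can confirm one is running.
    subprocess.run(["adb", "devices"], check=True)
    # -r reinstalls over an existing copy without losing its data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)


if __name__ == "__main__":
    install_apk(APK_PATH)
```

If no emulator appears under `adb devices`, consult your emulator's documentation; some emulators, such as BlueStacks, first require connecting adb to a local port.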

                                                                                                                                              \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/DAVAE/generate.py b/spaces/fclong/summary/fengshen/examples/DAVAE/generate.py deleted file mode 100644 index 5d5aebfeb8d68d77bc6c0045ea3c36d789de17ec..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/DAVAE/generate.py +++ /dev/null @@ -1,36 +0,0 @@ -# -*- encoding: utf-8 -*- -''' -Copyright 2022 The International Digital Economy Academy (IDEA). CCNL team. All rights reserved. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@File : generate.py -@Time : 2022/11/04 19:17 -@Author : Liang Yuxin -@Version : 1.0 -@Contact : liangyuxin@idea.edu.cn -@License : (C)Copyright 2022-2023, CCNL-IDEA -''' -# here put the import lib - -import torch -from fengshen.models.DAVAE.DAVAEModel import DAVAEModel -from transformers import BertTokenizer,T5Tokenizer -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -encoder_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Randeng-DAVAE-1.2B-General-Chinese") -decoder_tokenizer = T5Tokenizer.from_pretrained("IDEA-CCNL/Randeng-DAVAE-1.2B-General-Chinese", eos_token = '<|endoftext|>', pad_token = '',extra_ids=0) -decoder_tokenizer.add_special_tokens({'bos_token':''}) -vae_model = DAVAEModel.from_pretrained("IDEA-CCNL/Randeng-DAVAE-1.2B-General-Chinese").to(device) -input_texts = [ - "针对电力系统中的混沌振荡对整个互联电网的危害问题,提出了一种基于非线性光滑函数的滑模控制方法.", - "超市面积不算大.挺方便附近的居民购买的. 生活用品也比较齐全.价格适用中.", -] -output_texts = vae_model.simulate_batch(encoder_tokenizer,decoder_tokenizer,input_texts) -print(output_texts) diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/utils/__init__.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fffiloni/Image-to-MusicGen/CONTRIBUTING.md b/spaces/fffiloni/Image-to-MusicGen/CONTRIBUTING.md deleted file mode 100644 index 55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/CONTRIBUTING.md +++ /dev/null @@ -1,35 +0,0 @@ -# Contributing to Audiocraft - -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests - -Audiocraft is the implementation of a research paper. -Therefore, we do not plan on accepting many pull requests for new features. -We certainly welcome them for bug fixes. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. 
You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to encodec, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/serve-static/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/serve-static/README.md deleted file mode 100644 index 262d944ab7510ecb39b47055189a92793c94aa26..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/serve-static/README.md +++ /dev/null @@ -1,257 +0,0 @@ -# serve-static - -[![NPM Version][npm-version-image]][npm-url] -[![NPM Downloads][npm-downloads-image]][npm-url] -[![Linux Build][github-actions-ci-image]][github-actions-ci-url] -[![Windows Build][appveyor-image]][appveyor-url] -[![Test Coverage][coveralls-image]][coveralls-url] - -## Install - -This is a [Node.js](https://nodejs.org/en/) module available through the -[npm registry](https://www.npmjs.com/). Installation is done using the -[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): - -```sh -$ npm install serve-static -``` - -## API - -```js -var serveStatic = require('serve-static') -``` - -### serveStatic(root, options) - -Create a new middleware function to serve files from within a given root -directory. The file to serve will be determined by combining `req.url` -with the provided root directory. When a file is not found, instead of -sending a 404 response, this module will instead call `next()` to move on -to the next middleware, allowing for stacking and fall-backs. - -#### Options - -##### acceptRanges - -Enable or disable accepting ranged requests, defaults to true. -Disabling this will not send `Accept-Ranges` and ignore the contents -of the `Range` request header. - -##### cacheControl - -Enable or disable setting `Cache-Control` response header, defaults to -true. Disabling this will ignore the `immutable` and `maxAge` options. - -##### dotfiles - - Set how "dotfiles" are treated when encountered. A dotfile is a file -or directory that begins with a dot ("."). Note this check is done on -the path itself without checking if the path actually exists on the -disk. If `root` is specified, only the dotfiles above the root are -checked (i.e. the root itself can be within a dotfile when set -to "deny"). - - - `'allow'` No special treatment for dotfiles. - - `'deny'` Deny a request for a dotfile and 403/`next()`. - - `'ignore'` Pretend like the dotfile does not exist and 404/`next()`. - -The default value is similar to `'ignore'`, with the exception that this -default will not ignore the files within a directory that begins with a dot. - -##### etag - -Enable or disable etag generation, defaults to true. - -##### extensions - -Set file extension fallbacks. When set, if a file is not found, the given -extensions will be added to the file name and search for. The first that -exists will be served. Example: `['html', 'htm']`. - -The default value is `false`. 
- -##### fallthrough - -Set the middleware to have client errors fall-through as just unhandled -requests, otherwise forward a client error. The difference is that client -errors like a bad request or a request to a non-existent file will cause -this middleware to simply `next()` to your next middleware when this value -is `true`. When this value is `false`, these errors (even 404s), will invoke -`next(err)`. - -Typically `true` is desired such that multiple physical directories can be -mapped to the same web address or for routes to fill in non-existent files. - -The value `false` can be used if this middleware is mounted at a path that -is designed to be strictly a single file system directory, which allows for -short-circuiting 404s for less overhead. This middleware will also reply to -all methods. - -The default value is `true`. - -##### immutable - -Enable or disable the `immutable` directive in the `Cache-Control` response -header, defaults to `false`. If set to `true`, the `maxAge` option should -also be specified to enable caching. The `immutable` directive will prevent -supported clients from making conditional requests during the life of the -`maxAge` option to check if the file has changed. - -##### index - -By default this module will send "index.html" files in response to a request -on a directory. To disable this set `false` or to supply a new index pass a -string or an array in preferred order. - -##### lastModified - -Enable or disable `Last-Modified` header, defaults to true. Uses the file -system's last modified value. - -##### maxAge - -Provide a max-age in milliseconds for http caching, defaults to 0. This -can also be a string accepted by the [ms](https://www.npmjs.org/package/ms#readme) -module. - -##### redirect - -Redirect to trailing "/" when the pathname is a dir. Defaults to `true`. - -##### setHeaders - -Function to set custom headers on response. Alterations to the headers need to -occur synchronously. The function is called as `fn(res, path, stat)`, where -the arguments are: - - - `res` the response object - - `path` the file path that is being sent - - `stat` the stat object of the file that is being sent - -## Examples - -### Serve files with vanilla node.js http server - -```js -var finalhandler = require('finalhandler') -var http = require('http') -var serveStatic = require('serve-static') - -// Serve up public/ftp folder -var serve = serveStatic('public/ftp', { index: ['index.html', 'index.htm'] }) - -// Create server -var server = http.createServer(function onRequest (req, res) { - serve(req, res, finalhandler(req, res)) -}) - -// Listen -server.listen(3000) -``` - -### Serve all files as downloads - -```js -var contentDisposition = require('content-disposition') -var finalhandler = require('finalhandler') -var http = require('http') -var serveStatic = require('serve-static') - -// Serve up public/ftp folder -var serve = serveStatic('public/ftp', { - index: false, - setHeaders: setHeaders -}) - -// Set header to force download -function setHeaders (res, path) { - res.setHeader('Content-Disposition', contentDisposition(path)) -} - -// Create server -var server = http.createServer(function onRequest (req, res) { - serve(req, res, finalhandler(req, res)) -}) - -// Listen -server.listen(3000) -``` - -### Serving using express - -#### Simple - -This is a simple example of using Express. 
- -```js -var express = require('express') -var serveStatic = require('serve-static') - -var app = express() - -app.use(serveStatic('public/ftp', { index: ['default.html', 'default.htm'] })) -app.listen(3000) -``` - -#### Multiple roots - -This example shows a simple way to search through multiple directories. -Files are searched for in `public-optimized/` first, then `public/` second -as a fallback. - -```js -var express = require('express') -var path = require('path') -var serveStatic = require('serve-static') - -var app = express() - -app.use(serveStatic(path.join(__dirname, 'public-optimized'))) -app.use(serveStatic(path.join(__dirname, 'public'))) -app.listen(3000) -``` - -#### Different settings for paths - -This example shows how to set a different max age depending on the served -file type. In this example, HTML files are not cached, while everything else -is for 1 day. - -```js -var express = require('express') -var path = require('path') -var serveStatic = require('serve-static') - -var app = express() - -app.use(serveStatic(path.join(__dirname, 'public'), { - maxAge: '1d', - setHeaders: setCustomCacheControl -})) - -app.listen(3000) - -function setCustomCacheControl (res, path) { - if (serveStatic.mime.lookup(path) === 'text/html') { - // Custom Cache-Control for HTML files - res.setHeader('Cache-Control', 'public, max-age=0') - } -} -``` - -## License - -[MIT](LICENSE) - -[appveyor-image]: https://badgen.net/appveyor/ci/dougwilson/serve-static/master?label=windows -[appveyor-url]: https://ci.appveyor.com/project/dougwilson/serve-static -[coveralls-image]: https://badgen.net/coveralls/c/github/expressjs/serve-static/master -[coveralls-url]: https://coveralls.io/r/expressjs/serve-static?branch=master -[github-actions-ci-image]: https://badgen.net/github/checks/expressjs/serve-static/master?label=linux -[github-actions-ci-url]: https://github.com/expressjs/serve-static/actions/workflows/ci.yml -[node-image]: https://badgen.net/npm/node/serve-static -[node-url]: https://nodejs.org/en/download/ -[npm-downloads-image]: https://badgen.net/npm/dm/serve-static -[npm-url]: https://npmjs.org/package/serve-static -[npm-version-image]: https://badgen.net/npm/v/serve-static diff --git a/spaces/flax-community/SentenceSimplifier/About/baseline.md b/spaces/flax-community/SentenceSimplifier/About/baseline.md deleted file mode 100644 index 40b8bae0393815ee9894a1435473fa881eac2c50..0000000000000000000000000000000000000000 --- a/spaces/flax-community/SentenceSimplifier/About/baseline.md +++ /dev/null @@ -1,3 +0,0 @@ -## Current Basline from [paper](https://arxiv.org/abs/1907.12461) - -![baseline](./images/baseline.png) \ No newline at end of file diff --git a/spaces/flax-community/t5-vae/README.md b/spaces/flax-community/t5-vae/README.md deleted file mode 100644 index b770ea38c3c6fba79d149f748a716d9556a13298..0000000000000000000000000000000000000000 --- a/spaces/flax-community/t5-vae/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: T5 Vae -emoji: 🏃 -colorFrom: indigo -colorTo: blue -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application 
file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/foghuang/ChatGLM2-6B/evaluation/evaluate_ceval.py b/spaces/foghuang/ChatGLM2-6B/evaluation/evaluate_ceval.py deleted file mode 100644 index bfd317c44d17e6a88383210e0ca6bc1726fc423d..0000000000000000000000000000000000000000 --- a/spaces/foghuang/ChatGLM2-6B/evaluation/evaluate_ceval.py +++ /dev/null @@ -1,60 +0,0 @@ -import os -import glob -import re -import json -import torch -import torch.utils.data -from transformers import AutoTokenizer, AutoModel -from tqdm import tqdm - -tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True) -model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).bfloat16().cuda() - -choices = ["A", "B", "C", "D"] -choice_tokens = [tokenizer.encode(choice, add_special_tokens=False)[0] for choice in choices] - - -def build_prompt(text): - return "[Round {}]\n\n问:{}\n\n答:".format(1, text) - - -extraction_prompt = '综上所述,ABCD中正确的选项是:' - -accuracy_dict, count_dict = {}, {} -with torch.no_grad(): - for entry in glob.glob("./CEval/val/**/*.jsonl", recursive=True): - dataset = [] - with open(entry, encoding='utf-8') as file: - for line in file: - dataset.append(json.loads(line)) - correct = 0 - dataloader = torch.utils.data.DataLoader(dataset, batch_size=8) - for batch in tqdm(dataloader): - texts = batch["inputs_pretokenized"] - queries = [build_prompt(query) for query in texts] - inputs = tokenizer(queries, padding=True, return_tensors="pt", truncation=True, max_length=2048).to('cuda') - outputs = model.generate(**inputs, do_sample=False, max_new_tokens=512) - intermediate_outputs = [] - for idx in range(len(outputs)): - output = outputs.tolist()[idx][len(inputs["input_ids"][idx]):] - response = tokenizer.decode(output) - intermediate_outputs.append(response) - answer_texts = [text + intermediate + "\n" + extraction_prompt for text, intermediate in - zip(texts, intermediate_outputs)] - input_tokens = [build_prompt(answer_text) for answer_text in answer_texts] - inputs = tokenizer(input_tokens, padding=True, return_tensors="pt", truncation=True, max_length=2048).to('cuda') - outputs = model(**inputs, return_last_logit=True) - logits = outputs.logits[:, -1] - logits = logits[:, choice_tokens] - preds = logits.argmax(dim=-1) - correct += (preds.cpu() == batch["label"]).sum().item() - accuracy = correct / len(dataset) - print(entry, accuracy) - accuracy_dict[entry] = accuracy - count_dict[entry] = len(dataset) - -acc_total, count_total = 0.0, 0 -for key in accuracy_dict: - acc_total += accuracy_dict[key] * count_dict[key] - count_total += count_dict[key] -print(acc_total / count_total) \ No newline at end of file diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/commands/analyze_code.py b/spaces/fuckyoudeki/AutoGPT/autogpt/commands/analyze_code.py deleted file mode 100644 index e02ea4c5b4ba53530e559d1cab7a07b8e3c7c638..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/autogpt/commands/analyze_code.py +++ /dev/null @@ -1,25 +0,0 @@ -"""Code evaluation module.""" -from __future__ import annotations - -from autogpt.llm_utils import call_ai_function - - -def analyze_code(code: str) -> list[str]: - """ - A function that takes in a string and returns a response from create chat - completion api call. - - Parameters: - code (str): Code to be evaluated. 
- Returns: - A result string from create chat completion. A list of suggestions to - improve the code. - """ - - function_string = "def analyze_code(code: str) -> List[str]:" - args = [code] - description_string = ( - "Analyzes the given code and returns a list of suggestions" " for improvements." - ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/ganning/asl-gloss/README.md b/spaces/ganning/asl-gloss/README.md deleted file mode 100644 index 83e90555d0866a75a35e0f49a7ae709e86b374d7..0000000000000000000000000000000000000000 --- a/spaces/ganning/asl-gloss/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Asl Gloss -emoji: 🐢 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/geekyrakshit/enhance-me/app.py b/spaces/geekyrakshit/enhance-me/app.py deleted file mode 100644 index 3367c25f80afd2a827c661d8020cec09f8cfd5a6..0000000000000000000000000000000000000000 --- a/spaces/geekyrakshit/enhance-me/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import os -from PIL import Image -import streamlit as st -from tensorflow.keras import utils, backend - -from enhance_me import MIRNet, ZeroDCE - - -def get_mirnet_object() -> MIRNet: - utils.get_file( - "weights_lol_128.h5", - "https://github.com/soumik12345/enhance-me/releases/download/v0.2/weights_lol_128.h5", - cache_dir=".", - cache_subdir="weights", - ) - mirnet = MIRNet() - mirnet.build_model() - mirnet.load_weights("./weights/weights_lol_128.h5") - return mirnet - - -def get_zero_dce_object(model_alias: str) -> ZeroDCE: - utils.get_file( - f"{model_alias}.h5", - f"https://github.com/soumik12345/enhance-me/releases/download/v0.4/{model_alias}.h5", - cache_dir=".", - cache_subdir="weights", - ) - dce = ZeroDCE() - dce.load_weights(os.path.join("./weights", f"{model_alias}.h5")) - return dce - - -def main(): - st.markdown("# Enhance Me") - st.markdown("Made with :heart: by [geekyRakshit](http://github.com/soumik12345)") - application = st.sidebar.selectbox( - "Please select the application:", ("", "Low-light enhancement") - ) - if application != "": - if application == "Low-light enhancement": - uploaded_file = st.sidebar.file_uploader("Select your image:") - if uploaded_file is not None: - original_image = Image.open(uploaded_file) - st.image(original_image, caption="original image") - model_option = st.sidebar.selectbox( - "Please select the model:", - ( - "", - "MIRNet", - "Zero-DCE (dce_weights_lol_128)", - "Zero-DCE (dce_weights_lol_128_resize)", - "Zero-DCE (dce_weights_lol_256)", - "Zero-DCE (dce_weights_lol_256_resize)", - "Zero-DCE (dce_weights_unpaired_128)", - "Zero-DCE (dce_weights_unpaired_128_resize)", - "Zero-DCE (dce_weights_unpaired_256)", - "Zero-DCE (dce_weights_unpaired_256_resize)" - ), - ) - if model_option != "": - if model_option == "MIRNet": - st.sidebar.info("Loading MIRNet...") - mirnet = get_mirnet_object() - st.sidebar.info("Done!") - st.sidebar.info("Processing Image...") - enhanced_image = mirnet.infer(original_image) - st.sidebar.info("Done!") - st.image(enhanced_image, caption="enhanced image") - elif "Zero-DCE" in model_option: - model_alias = model_option[model_option.find("(") + 1: model_option.find(")")] - st.sidebar.info("Loading Zero-DCE...") - zero_dce = get_zero_dce_object(model_alias) - st.sidebar.info("Done!") - enhanced_image = zero_dce.infer(original_image) - st.sidebar.info("Done!") - 
st.image(enhanced_image, caption="enhanced image") - backend.clear_session() - - -if __name__ == "__main__": - main() diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/util.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/util.py deleted file mode 100644 index 45cb050ece6f401a22dde098ce3f1ff663c5eb6a..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/util.py +++ /dev/null @@ -1,197 +0,0 @@ -import importlib - -import torch -from torch import optim -import numpy as np - -from inspect import isfunction -from PIL import Image, ImageDraw, ImageFont - - -def log_txt_as_img(wh, xc, size=10): - # wh a tuple of (width, height) - # xc a list of captions to plot - b = len(xc) - txts = list() - for bi in range(b): - txt = Image.new("RGB", wh, color="white") - draw = ImageDraw.Draw(txt) - font = ImageFont.truetype('font/DejaVuSans.ttf', size=size) - nc = int(40 * (wh[0] / 256)) - lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc)) - - try: - draw.text((0, 0), lines, fill="black", font=font) - except UnicodeEncodeError: - print("Cant encode string for logging. Skipping.") - - txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0 - txts.append(txt) - txts = np.stack(txts) - txts = torch.tensor(txts) - return txts - - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - - -def isimage(x): - if not isinstance(x,torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def mean_flat(tensor): - """ - https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86 - Take the mean over all non-batch dimensions. 
- """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.") - return total_params - - -def instantiate_from_config(config): - if not "target" in config: - if config == '__is_first_stage__': - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -class AdamWwithEMAandWings(optim.Optimizer): - # credit to https://gist.github.com/crowsonkb/65f7265353f403714fce3b2595e0b298 - def __init__(self, params, lr=1.e-3, betas=(0.9, 0.999), eps=1.e-8, # TODO: check hyperparameters before using - weight_decay=1.e-2, amsgrad=False, ema_decay=0.9999, # ema decay to match previous code - ema_power=1., param_names=()): - """AdamW that saves EMA versions of the parameters.""" - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - if not 0.0 <= ema_decay <= 1.0: - raise ValueError("Invalid ema_decay value: {}".format(ema_decay)) - defaults = dict(lr=lr, betas=betas, eps=eps, - weight_decay=weight_decay, amsgrad=amsgrad, ema_decay=ema_decay, - ema_power=ema_power, param_names=param_names) - super().__init__(params, defaults) - - def __setstate__(self, state): - super().__setstate__(state) - for group in self.param_groups: - group.setdefault('amsgrad', False) - - @torch.no_grad() - def step(self, closure=None): - """Performs a single optimization step. - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - with torch.enable_grad(): - loss = closure() - - for group in self.param_groups: - params_with_grad = [] - grads = [] - exp_avgs = [] - exp_avg_sqs = [] - ema_params_with_grad = [] - state_sums = [] - max_exp_avg_sqs = [] - state_steps = [] - amsgrad = group['amsgrad'] - beta1, beta2 = group['betas'] - ema_decay = group['ema_decay'] - ema_power = group['ema_power'] - - for p in group['params']: - if p.grad is None: - continue - params_with_grad.append(p) - if p.grad.is_sparse: - raise RuntimeError('AdamW does not support sparse gradients') - grads.append(p.grad) - - state = self.state[p] - - # State initialization - if len(state) == 0: - state['step'] = 0 - # Exponential moving average of gradient values - state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format) - # Exponential moving average of squared gradient values - state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format) - if amsgrad: - # Maintains max of all exp. moving avg. of sq. grad. 
values - state['max_exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format) - # Exponential moving average of parameter values - state['param_exp_avg'] = p.detach().float().clone() - - exp_avgs.append(state['exp_avg']) - exp_avg_sqs.append(state['exp_avg_sq']) - ema_params_with_grad.append(state['param_exp_avg']) - - if amsgrad: - max_exp_avg_sqs.append(state['max_exp_avg_sq']) - - # update the steps for each param group update - state['step'] += 1 - # record the step after step update - state_steps.append(state['step']) - - optim._functional.adamw(params_with_grad, - grads, - exp_avgs, - exp_avg_sqs, - max_exp_avg_sqs, - state_steps, - amsgrad=amsgrad, - beta1=beta1, - beta2=beta2, - lr=group['lr'], - weight_decay=group['weight_decay'], - eps=group['eps'], - maximize=False) - - cur_ema_decay = min(ema_decay, 1 - state['step'] ** -ema_power) - for param, ema_param in zip(params_with_grad, ema_params_with_grad): - ema_param.mul_(cur_ema_decay).add_(param.float(), alpha=1 - cur_ema_decay) - - return loss \ No newline at end of file diff --git a/spaces/glt3953/app-text_generation_openai/README.md b/spaces/glt3953/app-text_generation_openai/README.md deleted file mode 100644 index 9e1f4af36ce42a3514de94c0e739f17e8ef038e3..0000000000000000000000000000000000000000 --- a/spaces/glt3953/app-text_generation_openai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: App-text Generation Openai -emoji: 🔥 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gossminn/fillmorle-app/sftp/predictor/__init__.py b/spaces/gossminn/fillmorle-app/sftp/predictor/__init__.py deleted file mode 100644 index 591fe1601a28d616661017e9ae1af4ce5806f557..0000000000000000000000000000000000000000 --- a/spaces/gossminn/fillmorle-app/sftp/predictor/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .span_predictor import SpanPredictor diff --git a/spaces/gotiQspiryo/whisper-ui/examples/A Welcome 2 Karachi Torrent PORTABLE.md b/spaces/gotiQspiryo/whisper-ui/examples/A Welcome 2 Karachi Torrent PORTABLE.md deleted file mode 100644 index ece2c073808133ed3cff3bbe53f1a25d2d15406f..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/A Welcome 2 Karachi Torrent PORTABLE.md +++ /dev/null @@ -1,7 +0,0 @@ - -

                                                                                                                                              DoubleTree by Hilton Hotel Niagara Falls New York welcomes you with an innovative blend of contemporary design and traditional hospitality. The hotel is located in downtown Niagara Falls along the Niagara River. Park your vehicle in the hotel parking lot, check in and walk to one of the Natural Wonders of the World, Niagara Falls.


                                                                                                                                              A Welcome 2 Karachi Torrent


                                                                                                                                              Download - https://urlgoal.com/2uyLTw




Featuring an indoor heated salt-water pool and an on-site fitness centre, Wingate by Wyndham Niagara Falls is located in Niagara Falls, New York, just 300 metres from Maid of the Mist. Guests receive free WiFi and a free grab-and-go breakfast. Each air-conditioned room has a flat-screen TV with satellite channels and an en suite bathroom with a hairdryer and free toiletries. A 24-hour front desk welcomes guests, and the hotel offers a snack bar, an on-site mini-market, a shared lounge, and a business centre. Buffalo Niagara Airport is only 31 km from the hotel, and the Niagara Falls Conference Center is 300 metres away, as is Seneca Niagara Casino.


                                                                                                                                              "Because half a dozen grasshoppers under a fern make the field ring with their importunate chink while thousands of great cattle, reposed beneath the shadow of the British oak, chew the cud and are silent, pray do not imagine that those who make the noise are the only inhabitants of the field-that, of course, they are many in number or that, after all, they are other than the little, shrivelled, meagre, hopping, though loud and troublesome insects of the hour."-Burke: "Reflections on the Revolution in France."THEY were sitting in the veranda of "the splendid palace of an Indian Pro-Consul"; surrounded by all the glory and mystery of the immemorial East. In plain English it was a one-storied, ten-roomed, whitewashed, mud-roofed bungalow, set in a dry garden of dusty tamarisk trees and divided from the road by a low mud wall. The green parrots screamed overhead as they flew in battalions to the river for their morning drink. Beyond the wall, clouds of fine dust showed where the cattle and goats of the city were passing afield to graze. The remorseless white light of the winter sunshine of Northern India lay upon everything and improved nothing, from the whining Peisian-wheel by the lawn-tennis court to the long perspective of level road and the blue, domed tombs of Mohammedan saints just visible above the trees."A Happy New Year," said Orde to his guest. "It's the first you've ever spent out of England, isn't it?""Yes. 'Happy New Year," said Pagett, smiling at the sunshine. "What a divine climate you have here! Just think of the brown cold fog hanging over London now!" And he rubbed his hands.It was more than twenty years since he had last seen Orde, his schoolmate, and their paths in the world had divided early. The one had quitted college to become a cog-wheel in the machinery of the great Indian Government; the other more blessed with goods, had been whirled into a similar position in the English scheme. Three successive elections had not affected Pagett's position with a loyal constituency, and he had grown insensibly to regard himself in some sort as a pillar of the Empire, whose real worth would be known later on. After a few years of conscientious attendance at many divisions, after newspaper battles innumerable and the publication of interminable correspondence, and more hasty oratory than in his calmer moments he cared to think upon, it occurred to him, as it had occurred to many of his fellows in Parliament, that a tour to India would enable him to sweep a larger lyre and address himself to the problems of Imperial administration with a firmer hand. Accepting, therefore, a general invitation extended to him by Orde some years before, Pagett had taken ship to Karachi, and only over-night had been received with joy by the Deputy-Commissioner of Amara. They had sat late, discussing the changes and chances of twenty years, recalling the names of the dead, and weighing the futures of the living, as is the custom of men meeting after intervals of action.Next morning they smoked the after breakfast pipe in the veranda, still regarding each other curiously, Pagett, in a light grey frock-coat and garments much too thin for the time of the year, and a puggried sun-hat carefully and wonderfully made. Orde in a shooting coat, riding breeches, brown cowhide boots with spurs, and a battered flax helmet. 
He had ridden some miles in the early morning to inspect a doubtful river dam. The men's faces differed as much as their attire. Orde's worn and wrinkled around the eyes, and grizzled at the temples, was the harder and more square of the two, and it was with something like envy that the owner looked at the comfortable outlines of Pagett's blandly receptive countenance, the clear skin, the untroubled eye, and the mobile, clean-shaved lips."And this is India!" said Pagett for the twentieth time staring long and intently at the grey feathering of tbe tamarisks."One portion of India only. It's very much like this for 300 miles in every direction. By the way, now that you have rested a little--I wouldn't ask the old question before--what d'you think of the country?"'Tis the most pervasive country that ever yet was seen. I acquired several pounds of your country coming up from Karachi. The air is heavy with it, and for miles and miles along that distressful eternity of rail there's no horizon to show where air and earth separate.""Yes. It isn't easy to see truly or far in India. But you had a decent passage out, hadn't you?""Very good on the whole. Your Anglo-Indian may be unsympathetic about one's political views; but he has reduced ship life to a science.""The Anglo-Indian is a political orphan, and if he's wise he won't be in a hurry to be adopted by your party grandmothers. But how were your companions, unsympathetic?""Well, there was a man called Dawlishe, a judge somewhere in this country it seems, and a capital partner at whist by the way, and when I wanted to talk to him about the progress of India in a political sense (Orde hid a grin, which might or might not have been sympathetic), the National Congress movement, and other things in which, as a Member of Parliament, I'm of course interested, he shifted the subject, and when I once cornered him, he looked me calmly in the eye, and said: 'That's all Tommy rot. Come and have a game at Bull.' You may laugh; but that isn't the way to treat a great and important question; and, knowing who I was. well. I thought it rather rude, don't you know; and yet Dawlishe is a thoroughly good fellow.""Yes; he's a friend of mine, and one of the straightest men I know. I suppose, like many Anglo-Indians, he felt it was hopeless to give you any just idea of any Indian question without the documents before you, and in this case the documents you want are the country and the people.""Precisely. That was why I came straight to you, bringing an open mind to bear on things. I'm anxious to know what popular feeling in India is really like y'know, now that it has wakened into political life. The National Congress, in spite of Dawlishe, must have caused great excitement among the masses?""On the contrary, nothing could be more tranquil than the state of popular feeling; and as to excitement, the people would as soon be excited over the 'Rule of Three' as over the Congress.""Excuse me, Orde, but do you think you are a fair judge? Isn't the official Anglo-Indian naturally jealous of any external influences that might move the masses, and so much opposed to liberal ideas, truly liberal ideas, that he can scarcely be expected to regard a popular movement with fairness?""What did Dawlishe say about Tommy Rot? Think a moment, old man. You and I were brought up together; taught by the same tutors, read the same books, lived the same life, and new languages, and work among new races; while you, more fortunate, remain at home. 
Why should I change my mind our mind-because I change my sky? Why should I and the few hundred Englishmen in my service become unreasonable, prejudiced fossils, while you and your newer friends alone remain bright and open-minded? You surely don't fancy civilians are members of a Primrose League?""Of course not, but the mere position of an English official gives him a point of view which cannot but bias his mind on this question." Pagett moved his knee up and down a little uneasily as he spoke."That sounds plausible enough, but, like more plausible notions on Indian matters, I believe it's a mistake. You'll find when you come to consult the unofficial Briton that our fault, as a class--I speak of the civilian now-is rather to magnify the progress that has been made toward liberal institutions. It is of English origin, such as it is, and the stress of our work since the Mutiny--only thirty years ago--has been in that direction. No, I think you will get no fairer or more dispassionate view of the Congress business than such men as I can give you. But I may as well say at once that those who know most of India, from the inside, are inclined to wonder at the noise our scarcely begun experiment makes in England.""But surely the gathering together of Congress delegates is of itself a new thing.""There's nothing new under the sun When Europe was a jungle half Asia flocked to the canonical conferences of Buddhism; and for centuries the people have gathered at Pun, Hurdwar, Trimbak, and Benares in immense numbers. A great meeting, what you call a mass meeting, is really one of the oldest and most popular of Indian institutions In the case of the Congress meetings, the only notable fact is that the priests of the altar are British, not Buddhist, Jam or Brahmanical, and that the whole thing is a British contrivance kept alive by the efforts of Messrs. Hume, Eardley, Norton, and Digby.""You mean to say, then, it s not a spontaneous movement?""What movement was ever spontaneous in any true sense of the word? This seems to be more factitious than usual. You seem to know a great deal about it; try it by the touchstone of subscriptions, a coarse but fairly trustworthy criterion, and there is scarcely the color of money in it. The delegates write from England that they are out of pocket for working expenses, railway fares, and stationery--the mere pasteboard and scaffolding of their show. It is, in fact, collapsing from mere financial inanition.""But you cannot deny that the people of India, who are, perhaps, too poor to subscribe, are mentally and morally moved by the agitation," Pagett insisted."That is precisely what I do deny. The native side of the movement is the work of a limited class, a microscopic minority, as Lord Dufferin described it, when compared with the people proper, but still a very interesting class, seeing that it is of our own creation. It is composed almost entirely of those of the literary or clerkly castes who have received an English education.""Surely that s a very important class. Its members must be the ordained leaders of popular thought.""Anywhere else they might he leaders, but they have no social weight in this topsy-turvy land, and though they have been employed in clerical work for generations they have no prac. tical knowledge of affairs. A ship's clerk is a useful person, but he it scarcely the captain; and an orderly-room writer, however smart he may be, is not the colonel. You see, the writer class in India has never till now aspired to anything like command. 
It wasn t allowed to. The Indian gentleman, for thousands of years past, has resembled Victor Hugo's noble:'Un vrai sire
                                                                                                                                              Chatelain
                                                                                                                                              Laisse ecrire
                                                                                                                                              Le vilain.
                                                                                                                                              Sa main digne
                                                                                                                                              Quand il signe
                                                                                                                                              Egratigne
                                                                                                                                              Le velin.And the little egralignures he most likes to make have been scored pretty deeply by the sword.""But this is childish and medheval nonsense!""Precisely; and from your, or rather our, point of view the pen is mightier than the sword. In this country it's otherwise. The fault lies in our Indian balances, not yet adjusted to civilized weights and measures.""Well, at all events, this literary class represent the natural aspirations and wishes of the people at large, though it may not exactly lead them, and, in spite of all you say, Orde, I defy you to find a really sound English Radical who would not sympathize with those aspirations."Pagett spoke with some warmth, and he had scarcely ceased when a well appointed dog-cart turned into the compound gates, and Orde rose saying:"Here is Edwards, the Master of the Lodge I neglect so diligently, come to talk about accounts, I suppose."As the vehicle drove up under the porch Pagett also rose, saying with the trained effusion born of much practice:"But this is also my friend, my old and valued friend Edwards. I'm delighted to see you. I knew you were in India, but not exactly where.""Then it isn't accounts, Mr. Edwards," said Orde, cheerily."Why, no, sir; I heard Mr. Pagett was coming, and as our works were closed for the New Year I thought I would drive over and see him.""A very happy thought. Mr. Edwards, you may not know, Orde, was a leading member of our Radical Club at Switebton when I was beginning political life, and I owe much to his exertions. There's no pleasure like meeting an old friend, except, perhaps, making a new one. I suppose, Mr. Edwards, you stick to the good old cause?""Well, you see, sir, things are different out here. There's precious little one can find to say against the Government, which was the main of our talk at home, and them that do say things are not the sort o' people a man who respects himself would like to be mixed up with. There are no politics, in a manner of speaking, in India. It's all work.""Surely you are mistaken, my good friend. Why I have come all the way from England just to see the working of this great National movement.""I don't know where you're going to find the nation as moves to begin with, and then you'll be hard put to it to find what they are moving about. It's like this, sir," said Edwards, who had not quite relished being called "my good friend." "They haven't got any grievance--nothing to hit with, don't you see, sir; and then there's not much to hit against, because the Government is more like a kind of general Providence, directing an old--established state of things, than that at home, where there's something new thrown down for us to fight about every three months.""You are probably, in your workshops, full of Eng'ish mechanics, out of the way of learning what the masses think.""I don't know so much about that. There are four of us English foremen, and between seven and eight hundred native fitters, smiths, carpenters, painters, and such like.""And they are full of the Congress, of course?""Never hear a word of it from year's end to year's end, and I speak the talk too. But I wanted to ask how things are going on at home--old Tyler and Brown and the rest?""We will speak of them presently, but your account of the indifference of your men surprises me almost as much as your own. I fear you are a backslider from the good old doctrine, Ed wards." 
Pagett spoke as one who mourned the death of a near relative."Not a bit, Sir, but I should be if I took up with a parcel of baboos, pleaders, and schoolboys, as never did a day's work in their lives, and couldn't if they tried. And if you was to poll us English railway men, mechanics, tradespeople, and the like of that all up and down the country from Peshawur to Calcutta, you would find us mostly in a tale together. And yet you know we're the same English you pay some respect to at home at 'lection time, and we have the pull o' knowing something about it.""This is very curious, but you will let me come and see you, and perhaps you will kindly show me the railway works, and we will talk things over at leisure. And about all old friends and old times," added Pagett, detecting with quick insight a look of disappointment in the mechanic's face.Nodding briefly to Orde, Edwards mounted his dog-cart and drove off."It's very disappointing," said the Member to Orde, who, while his friend discoursed with Edwards, had been looking over a bundle of sketches drawn on grey paper in purple ink, brought to him by a Chuprassee."Don't let it trouble you, old chap," 'said Orde, sympathetically. "Look here a moment, here are some sketches by the man who made the carved wood screen you admired so much in the dining-room, and wanted a copy of, and the artist himself is here too.""A native?" said Pagett."Of course," was the reply, "Bishen Siagh is his name, and he has two brothers to help him. When there is an important job to do, the three go 'ato partnership, but they spend most of their time and all their money in litigation over an inheritance, and I'm afraid they are getting involved, Thoroughbred Sikhs of the old rock, obstinate, touchy, bigoted, and cunning, but good men for all that. Here is Bishen Singn -shall we ask him about the Congress?"But Bishen Singh, who approached with a respectful salaam, had never heard of it, and he listened with a puzzled face and obviously feigned interest to Orde's account of its aims and objects, finally shaking his vast white turban with great significance when he learned that it was promoted by certam pleaders named by Orde, and by educated natives. He began with labored respect to explain how he was a poor man with no concern in such matters, which were all under the control of God, but presently broke out of Urdu into familiar Punjabi, the mere sound of which had a rustic smack of village smoke-reek and plough-tail, as he denounced the wearers of white coats, the jugglers with words who filched his field from him, the men whose backs were never bowed in honest work; and poured ironical scorn on the Bengali. He and one of his brothers had seen Calcutta, and being at work there had Bengali carpenters given to them as assistants."Those carpenters!" said Bishen Singh. "Black apes were more efficient workmates, and as for the Bengali babu-tchick!" The guttural click needed no interpretation, but Orde translated the rest, while Pagett gazed with in.. terest at the wood-carver."He seems to have a most illiberal prejudice against the Bengali," said the M.P."Yes, it's very sad that for ages outside Bengal there should he so bitter a prejudice. Pride of race, which also means race-hatred, is the plague and curse of India and it spreads far," pointed with his riding-whip to the large map of India on the veranda wall."See! I begin with the North," said he. 
"There's the Afghan, and, as a highlander, he despises all the dwellers in Hindoostan-with the exception of the Sikh, whom he hates as cordially as the Sikh hates him. The Hindu loathes Sikh and Afghan, and the Rajput--that's a little lower down across this yellow blot of desert--has a strong objection, to put it mildly, to the Maratha who, by the way, poisonously hates the Afghan. Let's go North a minute. The Sindhi hates everybody I've mentioned. Very good, we'll take less warlike races. The cultivator of Northern India domineers over the man in the next province, and the Behari of the Northwest ridicules the Bengali. They are all at one on that point. I'm giving you merely the roughest possible outlines of the facts, of course."Bishen Singh, his clean cut nostrils still quivering, watched the large sweep of the whip as it traveled from the frontier, through Sindh, the Punjab and Rajputana, till it rested by the valley of the Jumna"Hate--eternal and inextinguishable hate," concluded Orde, flicking the lash of the whip across the large map from East to West as he sat down. "Remember Canning's advice to Lord Granville, 'Never write or speak of Indian things without looking at a map.'"Pagett opened his eyes, Orde resumed. "And the race-hatred is only a part of it. What's really the matter with Bisben Singh is class-hatred, which, unfortunately, is even more intense and more widely spread. That's one of the little drawbacks of caste, which some of your recent English writers find an impeccable system."The wood-carver was glad to be recalled to the business of his craft, and his eyes shone as he received instructions for a carved wooden doorway for Pagett, which he promised should be splendidly executed and despatched to England in six months. It is an irrelevant detail, but in spite of Orde's reminders, fourteen months elapsed before the work was finished. Business over, Bishen Singh hung about, reluctant to take his leave, and at last joining his hands and approaching Orde with bated breath and whispering hum. bleness, said he had a petition to make. Orde's face suddenly lost all trace of expression. "Speak on, Bishen Singh," said he, and the carver in a whining tone explained that his case against his brothers was fixed for hearing b& fore a native judge and-here he dropped his voice still lower tid he was summarily stopped by Orde, who sternly pointed to the gate with an emphatic Begone!Bishen Singh, showing but little sign of discomposure, salaamed respectfully to the friends and departed.Pagett looked inquiry; Orde with complete recovery of his usual urbanity, replied: "It's nothing, only the old story, he wants his case to be tried by an English judge-they all do that-but when he began to hint that the other side were in improper relations with the native judge I had to shut him up. Gunga Ram, the man he wanted to make insinuations about, may not be very bright; but he's as honest as day-light on the bench. But that's just what one can't get a native to believe.""Do you really mean to say these people prefer to have their cases tried by English judges?"'Why, certainly."Pagett drew a long breath. "I didn't know that before." At this point a phaeton entered the compound, and Orde rose with "Confound it, there's old Rasul Ah Khan come to pay one of his tiresome duty calls. 
I'm afraid we shall never get through our little Congress discussion."Pagett was an aimost silent spectator of the grave formalities of a visit paid by a punctilious old Mahommedan gentleman to an Indian official; and was much impressed by the distinction of manner and fine appearance of the Mohammedan landholder. When the exhange of polite banalities came to a pause, he expressed a wish to learn the courtly visitor's opinion of the National Congress.Orde reluctantly interpreted, and with a smile which even Mohammedan politeness could not save from bitter scorn, Rasul Ah Khan intimated that he knew nothing about it and cared still less. It was a kind of talk encouraged by the Government for some mysterious purpose of its own, and for his own part he wondered and held his peace.Pagett was far from satisfied with this, and wished to have the old gentleman's opinion on the propriety of managing all Indian affairs on the basis of an elective system.Orde did his best to explain, but it was plain the visitor was bored and bewildered. Frankly, he didn't think much of committees; they had a Municipal Committee at Lahore and had elected a menial servant, an orderly, as a member. He had been informed of this on good authority, and after that, committees had ceased to interest him. But all was according to the rule of Government, and, please God, it was all for the best."What an old fossil it is!" cried Pagett, as Orde returned from seeing his guest to the door; "just like some old blue-blooded hidalgo of Spain. What does he really think of the Congress after all, and of the elective system?""Hates it all like poison. When you are sure of a majority, election is a fine system; but you can scarcely expect the Mahommedans, the mast mas terful and powerful minority in the country, to contemplate their own extinction with joy. The worst of it is that he and his co-religionists, who are many, and the landed proprietors, also, of Hindu race, are frightened and put out by this electiop business and by the importance we have bestowed on lawyers, pleaders, writers, and the like, who have, up to now, been in abject submission to them. They say little, hut after all they are the most important fagots in the great bundle of communities, and all the glib bunkum in the world would not pay for their estrangement. They have controlled the land.""But I am assured that experience of local self-government in your municipalities has been most satisfactory, and when once the principle is accepted in your centres, don't you know, it is bound to spread, and these important--ah'm people of yours would learn it like the rest. I see no difficulty at all," and the smooth lips closed with the complacent snap habitual to Pagett, M.P., the "man of cheerful yesterdays and confident to-morrows."Orde looked at him with a dreary smile."The privilege of election has been most reluctantly withdrawn from scores of municipalities, others have had to be summarily suppressed, and, outside the Presidency towns, the actual work done has been badly performed. This is of less moment, perhaps-it only sends up the local death-rates-than the fact that the public interest in municipal elections, never very strong, has waned, and is waning, in spite of careful nursing on the part of Government servants.""Can you explain this lack of interest?" said Pagett, putting aside the rest of Orde's remarks."You may find a ward of the key in the fact that only one in every thousand af our population can spell. 
Then they are infinitely more interested in religion and caste questions than in any sort of politics. When the business of mere existence is over, their minds are occupied by a series of interests, pleasures, rituals, superstitions, and the like, based on centuries of tradition and usage. You, perhaps, find it hard to conceive of people absolutely devoid of curiosity, to whom the book, the daily paper, and the printed speech are unknown, and you would describe their life as blank. That's a profound mistake. You are in another land, another century, down on the bed-rock of society, where the family merely, and not the community, is all-important. The average Oriental cannot be brought to look beyond his clan. His life, too, is more complete and self-sufficing, and less sordid and low-thoughted than you might imagine. It is bovine and slow in some respects, but it is never empty. You and I are inclined to put the cart before the horse, and to forget that it is the man that is elemental, not the book.

'The corn and the cattle are all my care,
And the rest is the will of God.'

Why should such folk look up from their immemorially appointed round of duty and interests to meddle with the unknown and fuss with voting-papers? How would you, atop of all your interests, care to conduct even one-tenth of your life according to the manners and customs of the Papuans, let's say? That's what it comes to."

"But if they won't take the trouble to vote, why do you anticipate that Mohammedans, proprietors, and the rest would be crushed by majorities of them?"

Again Pagett disregarded the closing sentence.

"Because, though the landholders would not move a finger on any purely political question, they could be raised in dangerous excitement by religious hatreds. Already the first note of this has been sounded by the people who are trying to get up an agitation on the cow-killing question, and every year there is trouble over the Mohammedan Muharrum processions."

"But who looks after the popular rights, being thus unrepresented?"

"The Government of Her Majesty the Queen, Empress of India, in which, if the Congress promoters are to be believed, the people have an implicit trust; for the Congress circular, specially prepared for rustic comprehension, says the movement is 'for the remission of tax, the advancement of Hindustan, and the strengthening of the British Government.' This paper is headed in large letters--'MAY THE PROSPERITY OF THE EMPIRE OF INDIA ENDURE.'"

"Really!" said Pagett, "that shows some cleverness. But there are things better worth imitation in our English methods of--er--political statement than this sort of amiable fraud."

"Anyhow," resumed Orde, "you perceive that not a word is said about elections and the elective principle, and the reticence of the Congress promoters here shows they are wise in their generation."

"But the elective principle must triumph in the end, and the little difficulties you seem to anticipate would give way on the introduction of a well-balanced scheme, capable of indefinite extension."

"But is it possible to devise a scheme which, always assuming that the people took any interest in it, without enormous expense, ruinous dislocation of the administration and danger to the public peace, can satisfy the aspirations of Mr.
Hume and his following, and yet safeguard the interests of the Mahommedans, the landed and wealthy classes, the Conservative Hindus, the Eurasians, Parsees, Sikhs, Rajputs, native Christians, domiciled Europeans and others, who are each important and powerful in their way?"

Pagett's attention, however, was diverted to the gate, where a group of cultivators stood in apparent hesitation.

"Here are the twelve Apostles, by Jove--come straight out of Raffaele's cartoons," said the M.P., with the fresh appreciation of a newcomer.

Orde, loth to be interrupted, turned impatiently toward the villagers, and their leader, handing his long staff to one of his companions, advanced to the house.

"It is old Jelbo, the Lumberdar, or head-man of Pind Sharkot, and a very intelligent man for a villager."

The Jat farmer had removed his shoes and stood smiling on the edge of the veranda. His strongly marked features glowed with russet bronze, and his bright eyes gleamed under deeply set brows, contracted by lifelong exposure to sunshine. His beard and moustache streaked with grey swept from bold cliffs of brow and cheek in the large sweeps one sees drawn by Michael Angelo, and strands of long black hair mingled with the irregularly piled wreaths and folds of his turban. The drapery of stout blue cotton cloth thrown over his broad shoulders and girt round his narrow loins, hung from his tall form in broadly sculptured folds, and he would have made a superb model for an artist in search of a patriarch.

Orde greeted him cordially, and after a polite pause the countryman started off with a long story told with impressive earnestness. Orde listened and smiled, interrupting the speaker at times to argue and reason with him in a tone which Pagett could hear was kindly, and finally checking the flux of words was about to dismiss him, when Pagett suggested that he should be asked about the National Congress.

But Jelbo had never heard of it. He was a poor man and such things, by the favor of his Honor, did not concern him.

"What's the matter with your big friend that he was so terribly in earnest?" asked Pagett, when he had left.

"Nothing much. He wants the blood of the people in the next village, who have had smallpox and cattle plague pretty badly, and by the help of a wizard, a currier, and several pigs have passed it on to his own village. 'Wants to know if they can't be run in for this awful crime. It seems they made a dreadful charivari at the village boundary, threw a quantity of spell-bearing objects over the border, a buffalo's skull and other things; then branded a chamar--what you would call a currier--on his hinder parts and drove him and a number of pigs over into Jelbo's village. Jelbo says he can bring evidence to prove that the wizard directing these proceedings, who is a Sansi, has been guilty of theft, arson, cattle-killing, perjury and murder, but would prefer to have him punished for bewitching them and inflicting small-pox."

"And how on earth did you answer such a lunatic?"

"Lunatic! The old fellow is as sane as you or I; and he has some ground of complaint against those Sansis. I asked if he would like a native superintendent of police with some men to make inquiries, but he objected on the grounds the police were rather worse than smallpox and criminal tribes put together."

"Criminal tribes--er--I don't quite understand," said Pagett.

"We have in India many tribes of people who in the slack, ante-British days became robbers in various kinds, and preyed on the people.
They are being restrained and reclaimed little by little, and in time will become useful citizens, but they still cherish hereditary traditions of crime, and are a difficult lot to deal with. By the way, what about the political rights of these folk under your schemes? The country people call them vermin, but I suppose they would be electors with the rest."

"Nonsense--special provision would be made for them in a well-considered electoral scheme, and they would doubtless be treated with fitting severity," said Pagett, with a magisterial air.

"Severity, yes--but whether it would be fitting is doubtful. Even those poor devils have rights, and, after all, they only practice what they have been taught."

"But criminals, Orde!"

"Yes, criminals with codes and rituals of crime, gods and godlings of crime, and a hundred songs and sayings in praise of it. Puzzling, isn't it?"

"It's simply dreadful. They ought to be put down at once. Are there many of them?"

"Not more than about sixty thousand in this province, for many of the tribes broadly described as criminal are really vagabond and criminal only on occasion, while others are being settled and reclaimed. They are of great antiquity, a legacy from the past, the golden, glorious Aryan past of Max Muller, Birdwood and the rest of your spindrift philosophers."

An orderly brought a card to Orde, who took it with a movement of irritation at the interruption, and handed it to Pagett; a large card with a ruled border in red ink, and in the centre in schoolboy copper plate, Mr. Dina Nath. "Give salaam," said the civilian, and there entered in haste a slender youth, clad in a closely fitting coat of grey homespun, tight trousers, patent-leather shoes, and a small black velvet cap. His thin cheek twitched, and his eyes wandered restlessly, for the young man was evidently nervous and uncomfortable, though striving to assume a free and easy air.

"Your honor may perhaps remember me," he said in English, and Orde scanned him keenly.

"I know your face somehow. You belonged to the Shershah district, I think, when I was in charge there?"

"Yes, Sir, my father is writer at Shershah, and your honor gave me a prize when I was first in the Middle School examination five years ago. Since then I have prosecuted my studies, and I am now second year's student in the Mission College."

"Of course: you are Kedar Nath's son--the boy who said he liked geography better than play or sugar cakes, and I didn't believe you. How is your father getting on?"

"He is well, and he sends his salaam, but his circumstances are depressed, and he also is down on his luck."

"You learn English idioms at the Mission College, it seems."

"Yes, sir, they are the best idioms, and my father ordered me to ask your honor to say a word for him to the present incumbent of your honor's shoes, the latchet of which he is not worthy to open, and who knows not Joseph; for things are different at Shershah now, and my father wants promotion."

"Your father is a good man, and I will do what I can for him."

At this point a telegram was handed to Orde, who, after glancing at it, said he must leave his young friend, whom he introduced to Pagett, "a member of the English House of Commons who wishes to learn about India."

Orde had scarcely retired with his telegram when Pagett began:

"Perhaps you can tell me something of the National Congress movement?"

"Sir, it is the greatest movement of modern times, and one in which all educated men like us must join.
All our students are for the Congress.""Excepting, I suppose, Mahommedans, and the Christians?" said Pagett, quick to use his recent instruction."These are some mere exceptions to the universal rule.""But the people outside the College, the working classes, the agriculturists; your father and mother, for instance.""My mother," said the young man, with a visible effort to bring himself to pronounce the word, "has no ideas, and my father is not agriculturist, nor working class; he is of the Kayeth caste; but he had not the advantage of a collegiate education, and he does not know much of the Congress. It is a movement for the educated young-man" -connecting adjective and noun in a sort of vocal hyphen."Ah, yes," said Pagett, feeling he was a little off the rails, "and what are the benefits you expect to gain by it?""Oh, sir, everything. England owes its greatness to Parliamentary institutions, and we should at once gain the same high position in scale of nations. Sir, we wish to have the sciences, the arts, the manufactures, the industrial factories, with steam engines, and other motive powers and public meetings, and debates. Already we have a debating club in connection with the college, and elect a Mr. Speaker. Sir, the progress must come. You also are a Member of Parliament and worship the great Lord Ripon," said the youth, breathlessly, and his black eyes flashed as he finished his commaless sentences."Well," said Pagett, drily, "it has not vet occurred to me to worship his Lord-ship, although I believe he is a very worthy man, and I am not sure that England owes quite all the things you name to the House of Commons. You see, my young friend, the growth of a nation like ours is slow, subject to many influences, and if you have read your history aright"-"Sir. I know it all-all! Norman Conquest, Magna Charta, Runnymede, Reformation, Tudors, Stuarts, Mr. Milton and Mr. Burke, and I have read something of Mr. Herbert Spencer and Gibbon's 'Decline and Fall,' Reynolds' Mysteries of the Court,' and Pagett felt like one who had pulled the string of a shower-bath unawares, and hastened to stop the torrent with a qtlestion as to what particular grievances of the people of India the attention of an elected assembly should be first directed. But young Mr. Dma Nath was slow to particularize. There were many, very many demanding consideration. Mr. Pagett would like to hear of one or two typical examples. The Repeal of the Arms Act was at last named, and the student learned for the first time that a license was necessary before an Englishman could carry a gun in England. Then natives of India ought to be allowed to become Volunteer Riflemen if they chose, and the absolute equality of the Oriental with his European fellow-subject in civil status should be proclaimed on principle, and the Indian Army should be considerably reduced. The student was not, however, prepared with answers to Mr. Pagett's mildest questions on these points, and he returned to vague generalities, leaving the M.P. so much impressed with the crudity of his views that he was glad on Orde's return to say good-bye to his "very interesting" young friend."What do you think of young India?" asked Orde."Curious, very curious-and callow.""And yet," the civilian replied, "one can scarcely help sympathizing with him for his mere youth's sake. The young orators of the Oxford Union arrived at the same conclusions and showed doubtless just the same enthusiasm. 
If there were any political analogy between India and England, if the thousand races of this Empire were one, if there were any chance even of their learning to speak one language, if, in short, India were a Utopia of the debating-room, and not a real land, this kind of talk might be worth listening to, but it is all based on false analogy and ignorance of the facts.""But he is a native and knows the facts.""He is a sort of English schoolboy, but married three years, and the father of two weaklings, and knows less than most English schoolboys. You saw all he is and knows, and such ideas as he has acquired are directly hostile to the most cherished convictions of the vast majority of the people.""But what does he mean by saying he is a student of a mission college? Is he a Christian?""He meant just what he said, and he is not a Christian, nor ever will he be. Good people in America, Scotland and England, most of whom would never dream of collegiate education for their own sons, are pinching themselves to bestow it in pure waste on Indian youths. Their scheme is an oblique, subterranean attack on heathenism; the theory being that with the jam of secular education, leading to a University degree, the pill of moral or religious instruction may he coaxed down the heathen gullet.""But does it succeed; do they make converts?""They make no converts, for the subtle Oriental swallows the jam and rejects the pill; but the mere example of the sober, righteous, and godly lives of the principals and professors who are most excellent and devoted men, must have a certain moral value. Yet, as Lord Lansdowne pointed out the other day, the market is dangerously overstocked with graduates of our Universities who look for employment in the administration. An immense number are employed, but year by year the college mills grind out increasing lists of youths foredoomed to failure and disappointment, and meanwhile, trade. manufactures. and the industrial arts are neglected, and in fact regarded with contempt by our new literary mandarins in posse.""But our young friend said he wanted steam-engines and factories," said Pagett."Yes, he would like to direct such concerns. He wants to begin at the top, for manual labor is held to be discreditable, and he would never defile his hands by the apprenticeship which the architects, engineers, and manufacturers of England cheerfully undergo; and he would be aghast to learn that the leading names of industrial enterprise in England belonged a generation or two since, or now belong, to men who wrought with their own hands. And, though he talks glibly of manufacturers, he refuses to see that the Indian manufacturer of the future will be the despised workman of the present. It was proposed, for example, a few weeks ago, that a certain municipality in this province should establish an elementary technical school for the sons of workmen. The stress of the opposition to the plan came from a pleader who owed all he had to a college education bestowed on him gratis by Government and missions. You would have fancied some fine old crusted Tory squire of the last generation was speaking. 'These people,' he said, 'want no education, for they learn their trades from their fathers, and to teach a workman's son the elements of mathematics and physical science would give him ideas above his business. They must be kept in their place, and it was idle to imagine that there was any science in wood or iron work.' And he carried his point. 
But the Indian workman will rise in the social scale in spite of the new literary caste.""In England we have scarcely begun to realize that there is an industrial class in this country, yet, I suppose, the example of men, like Edwards for instance, must tell," said Pagett, thoughtfully."That you shouldn't know much about it is natural enough, for there are but few sources of information. India in this, as in other respects, is like a badly kept ledger-not written up to date. And men like Edwards are, in reality, missionaries, who by precept and example are teaching more lessons than they know. Only a few, however, of their crowds of subordinates seem to care to try to emulate them, and aim at individual advancement; the rest drop into the ancient Indian caste gr('ove.""How do you mean?" asked he, "Well, it is found that the new railway and factory workmen, the fitter, the smith, the engine-driver, and the rest are already forming separate hereditary castes. You may notice this down at Jamalpur in Bengal, one of the oldest railway centres; and at other places, and in other industries, they are following the same inexorable Indian law.""Which means?" queried Pagett."It means that the rooted habit of the people is to gather in small self-contained, self-sufficing family groups with no thought or care for any interests but their own-a habit which is scarcely compatible with the right acceptation of the elective principle.""Yet you must admit, Orde, that though our young friend was not able to expound tbe faith that is in him, your Indian army is too big.""Not nearly big enough for its main purpose. And, as a side issue, there are certain powerful minorities of fighting folk whose interests an Asiatic Government is bound to consider. Arms is as much a means of livelihood as civil employ under Government and law. And it would be a heavy strain on British bayonets to hold down Sikhs, Jats, Bilochis, Rohillas, Rajputs, Bhils, Dogras, Pahtans, and Gurkbas to abide by the decisions of a numerical majority opposed to their interests. Leave the 'numerical majority' to itself without the British bayonets-a flock of sheep might as reasonably hope to manage a troop of collies.""This complaint about excessive growth of the army is akin to another contention of the Congress party. They protest against the malversation of the whole of the moneys raised by additional taxes as a Famine Insurance Fund to other purposes. You must be aware that this special Famine Fund has all been spent on frontier roads and defences and strategic railway schemes as a protection against Russia.""But there was never a special famine fund raised by special taxation and put by as in a box. No sane administrator would dream of such a thing. In a time of prosperity a finance minister, rejoicing in a margin, proposed to annually apply a million and a half to the construction of railways and canals for the protection of districts liable to scarcity, and to the reduction of the annual loans for public works. But times were not always prosperous, and the finance minister had to choose whether be would bang up the insurance scheme for a year or impose fresh taxation. When a farmer hasn't got the little surplus he hoped to have for buying a new wagon and draining a low-lying field corner, you don't accuse him of malversation, if he spends what he has on the necessary work of the rest of his farm."A clatter of hoofs was heard, and Orde looked up with vexation, but his brow cleared as a horseman halted under the porch."HelIn, Orde! 
just looked in to ask if you are coming to polo on Tuesday: we want you badly to help to crumple up the Krab Bokbar team."Orde explained that he had to go out into the District, and while the visitor complained that though good men wouldn't play, duffers were always keen, and that his side would probalny be beaten, Pagett rose to look at his mount, a red, lathered Biloch mare, with a curious lyre-like incurving of the ears. "Quite a little thoroughbred in all other respects," said the M.P., and Orde presented Mr. Reginald Burke, Manager of the Siad and Sialkote Bank to his friend."Yes, she's as good as they make 'em, and she's all the female I possess and spoiled in consequence, aren't you, old girl?" said Burke, patting the mare's glossy neck as she backed and plunged."Mr. Pagett," said Orde, "has been asking me about the Congress. What is your opinion?" Burke turned to the M. P. with a frank smile."Well, if it's all the same to you, sir, I should say, Damn the Congress, but then I'm no politician, but only a business man.""You find it a tiresome subject?""Yes, it's all that, and worse than that, for this kind of agitation is anything but wholesome for the country.""How do you mean?""It would be a long job to explain, and Sara here won't stand, but you know how sensitive capital is, and how timid investors are. All this sort of rot is likely to frighten them, and we can't afford to frighten them. The passengers aboard an Ocean steamer don't feel reassured when the ship's way is stopped, and they hear the workmen's hammers tinkering at the engines down below. The old Ark's going on all right as she is, and only wants quiet and room to move. Them's my sentiments, and those of some other people who have to do with money and business.""Then you are a thick-and-thin supporter of the Government as it is.""Why, no! The Indian Government is much too timid with its money-like an old maiden aunt of mine-always in a funk about her investments. They don't spend half enough on railways for instance, and they are slow in a general way, and ought to be made to sit up in all that concerns the encouragement of private enterprise, and coaxing out into use the millions of capital that lie dormant in the country."The mare was dancing with impatience, and Burke was evidently anxious to be off, so the men wished him good-bye."Who is your genial friend who condemns both Congress and Government in a breath?" asked Pagett, with an amused smile."Just now he is Reggie Burke, keener on polo than on anything else, but if you go to the Sind and Sialkote Bank to-morrow you would find Mr. Reginald Burke a very capable man of business, known and liked by an immense constituency North and South of this.""Do you think he is right about the Government's want of enterpnse?""I should hesitate to say. Better consult the merchants and chambers of commerce in Cawnpore, Madras, Bombay, and Calcutta. But though these bodies would like, as Reggie puts it, to make Government sit up, it is an elementary consideration in governing a country like India, which must be administered for the benefit of the people at large, that the counsels of those who resort to it for the sake of making money should be judiciously weighed and not allowed to overpower the rest. They are welcome guests here, as a matter of course, but it has been found best to restrain their influence. 
Thus the rights of plantation laborers, factory operatives, and the like, have been protected, and the capitalist, eager to get on, has not always regarded Government action with favor. It is quite conceivable that under an elective system the commercial communities of the great towns might find means to secure majorities on labor questions and on financial matters.""They would act at least with intelligence and consideration.""Intelligence, yes; but as to consideration, who at the present moment most bitterly resents the tender solicitude of Lancashire for the welfare and protection of the Indian factory operative? English and native capitalists running cotton mills and factories.""But is the solicitude of Lancashire in this matter entirely disinterested?""It is no business of mine to say. I merely indicate an example of how a powerful commercial interest might hamper a Government intent in the first place on the larger interests of humanity."Orde broke off to listen a moment. "There's Dr. Lathrop talking to my wife in the drawing-room," said he."Surely not; that's a lady's voice, and if my ears don't deceive me, an American.""Exactly, Dr. Eva McCreery Lathrop, chief of the new Women's Hospital here, and a very good fellow forbye. Good-morning, Doctor," he said, as a graceful figure came out on the veranda, "you seem to be in trouble. I hope Mrs. Orde was able to help you.""Your wife is real kind and good, ] always come to her when I'm in a fix but I fear it's more than comforting I want.""You work too hard and wear yourself out," said Orde, kindly. "Let me introduce my friend, Mr. Pagett, just fresh from home, and anxious to learn his India. You could tell him something of that more important half of which a mere man knows so little.""Perhaps I could if I'd any heart to do it, but I'm in trouble, I've lost a case, a case that was doing well, through nothing in the world but inattention on the part of a nurse I had begun to trust. And when I spoke only a small piece of my mind she collapsed in a whining heap on the floor. It is hopeless."The men were silent, for the blue eyes of the lady doctor were dim. Recovering herself she looked up with a smile, half sad, half humorous, "And I am in a whining heap, too; but what phase of Indian life are you particularly interested in, sir?""Mr. Pagett intends to study the political aspect of things and the possibility of bestowing electoral institutions on the people.""Wouldn't it be as much to the purpose to bestow point-lace collars on them? They need many things more urgently than votes. Why it's like giving a bread-pill for a broken leg.""Er-I don't quite follow," said Pagett, uneasily."Well, what's the matter with this country is not in the least political, but an all round entanglement of physical, social, and moral evils and corruptions, all more or less due to the unnatural treatment of women. You can't gather figs from thistles, and so long as the system of infant marriage, the prohibition of the remarriage of widows, the lifelong imprisonment of wives and mothers in a worse than penal confinement, and the withholding from them of any kind of education or treatment as rational beings continues, the country can't advance a step. Half of it is morally dead, and worse than dead, and that's just the half from which we have a right to look for the best impulses. It's right here where the trouble is, and not in any political considerations whatsoever.""But do they marry so early?" 
said Pagett, vaguely."The average age is seven, but thousands are married still earlier. One result is that girls of twelve and thirteen have to bear the burden of wifehood and motherhood, and, as might be expected, the rate of mortality both for mothers and children is terrible. Pauperism, domestic unhappiness, and a low state of health are only a few of the consequences of this. Then, when, as frequently happens, the boy-husband dies prematurely, his widow is condemned to worse than death. She may not re-marry, must live a secluded and despised life, a life so unnatural that she sometimes prefers suicide; more often she goes astray. You don't know in England what such words as 'infant-marriage, baby-wife, girl-mother, and virgin-widow' mean; but they mean unspeakable horrors here.""Well, but the advanced political party here will surely make it their business to advocate social reforms as well as political ones," said Pagett."Very surely they will do no such thing," said the lady doctor, emphatically. "I wish I could make you understand. Why, even of the funds devoted to the Marchioness of Dufferin's organization for medical aid to the women of India, it was said in print and in speech, that they would be better spent on more college scholarships for men. And in all the advanced parties' talk-God forgive them--and in all their programmes, they carefully avoid all such subjects. They will talk about the protection of the cow, for that's an ancient superstition--they can all understand that; but the protection of the women is a new and dangerous idea." She turned to Pagett impulsively:"You are a member of the English Parliament. Can you do nothing? The foundations of their life are rotten-utterly and bestially rotten. I could tell your wife things that I couldn't tell you. I know the life--the inner life that belongs to the native, and I know nothing else; and believe me you might as well try to grow golden-rod in a mushroom-pit as to make anything of a people that are born and reared as these --these things're. The men talk of their rights and privileges. I have seen the women that bear these very men, and again-may God forgive the men!"Pagett's eyes opened with a large wonder. Dr. Lathrop rose tempestuously."I must be off to lecture," said she, "and I'm sorry that I can't show you my hospitals; but you had better believe, sir, that it's more necessary for India than all the elections in creation.""That's a woman with a mission, and no mistake," said Pagett, after a pause."Yes; she believes in her work, and so do I," said Orde. "I've a notion that in the end it will be found that the most helpful work done for India in this generation was wrought by Lady Dufferin in drawing attention-what work that was, by the way, even with her husband's great name to back it to the needs of women here. In effect, native habits and beliefs are an organized conspiracy against the laws of health and happy life--but there is some dawning of hope now.""How d' you account for the general indifferencc, then?""I suppose it's due in part to their fatalism and their utter indifference to all human suffering. How much do you imagine the great province of the Pun-jab with over twenty million people and half a score rich towns has contributed to the maintenance of civil dispensaries last year? 
About seven thousand rupees.""That's seven hundred pounds," said Pagett, quickly."I wish it was," replied Orde; "but anyway, it's an absurdly inadequate sum, and shows one of the blank sides of Oriental character."Pagett was silent for a long time. The question of direct and personal pain did not lie within his researches. He pre ferred to discuss the weightier matters of the law, and contented himself with murmuring: "They'll do better later on." Then, with a rush, returning to his first thought:"But, my dear Orde, if it's merely a class movement of a local and temporary character, how d' you account for Bradlaugh, who is at least a man of sense taking it up?""I know nothing of the champion of the New Brahmins but what I see in the papers. I suppose there is something tempting in being hailed by a large assemblage as the representative of the aspirations of two hundred and fifty millions of people. Such a man looks 'through all the roaring and the wreaths,' and does not reflect that it is a false perspective, which, as a matter of fact, hides the real complex and manifold India from his gaze. He can scarcely be expected to distinguish between the ambitions of a new oligarchy and the real wants of the people of whom he knows nothing. But it's strange that a professed Radical should come to be the chosen advocate of a movement which has for its aim the revival of an ancient tyranny. Shows how even Radicalism can fall into academic grooves and miss the essential truths of its own creed. Believe me, Pagett, to deal with India you want first-hand knowledge and experience. I wish he would come and live here for a couple of years or so.""Is not this rather an ad hminem style of argument?""Can't help it in a case like this. Indeed, I am not sure you ought not to go further and weigh the whole character and quality and upbringing of the man. You must admit that the monumental complacency with which he trotted out his ingenious little Constitution for India showed a strange want of imagination and the sense of humor.""No, I don't quite admit it," said Pagett."Well, you know him and I don't, but that's how it strikes a stranger." He turned on his heel and paced the veranda thoughtfully. "And, after all, the burden of the actual, daily unromantic toil falls on the shoulders of the men out here, and not on his own. He enjoys all the privileges of recommendation without responsibility, and we-well, perhaps, when you've seen a little more of India you'll understand. To begin with, our death rate's five times higher than yours-I speak now for the brutal bureaucrat--and we work on the refuse of worked-out cities and exhausted civilizations, among the bones of the dead."Pagett laughed. "That's an epigrammatic way of putting it, Orde.""Is it? Let's see," said the Deputy Commissioner of Amara, striding into the sunshine toward a half-naked gardener potting roses. He took the man's hoe, and went to a rain-scarped bank at the bottom of the garden."Come here, Pagett," he said, and cut at the sun-baked soil. After three strokes there rolled from under the blade of the hoe the half of a clanking skeleton that settled at Pagett's feet in an unseemly jumble of bones. The M.P. drew back."Our houses are built on cemeteries," said Orde. "There are scores of thousands of graves within ten miles."Pagett was contemplating the skull with the awed fascination of a man who has but little to do with the dead. "India's a very curious place," said he, after a pause."Ah? You'll know all about it in three months. 
Come in to lunch," said Orde.

                                                                                                                                              aaccfb2cb3
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download iTunes 9.2.1 for Mac The Latest Version of Apples Music Player.md b/spaces/gotiQspiryo/whisper-ui/examples/Download iTunes 9.2.1 for Mac The Latest Version of Apples Music Player.md deleted file mode 100644 index c5ceef88aecb8d64a00860a980599ebfa4823c2f..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download iTunes 9.2.1 for Mac The Latest Version of Apples Music Player.md +++ /dev/null @@ -1,30 +0,0 @@ - -

Sometimes users need to install an old version of iTunes on a Mac or Windows computer to perform specific tasks that require an older iTunes version and may not be possible with the latest release. Here you can download old iTunes versions from official Apple links.

                                                                                                                                              -

Download iTunes 9.2.1 For Mac


                                                                                                                                              Download Filehttps://urlgoal.com/2uyLNI



                                                                                                                                              -

                                                                                                                                              Apple today has released iTunes 9.2.1 for both Windows and Mac OS X. This is a minor release which mainly brings stability & performance improvements, along with bug fixes when taking backups and syncing all the iOS based devices. Full change log is as follows.

                                                                                                                                              -

                                                                                                                                              Warning: Jailbreakers must stay away from iTunes 9.2.1 until further notice. update: iTunes 9.2.1 is fine for already jailbroken devices, also new jailbreaks via RedSn0w/PwnageTool. Avoid it for now for Fresh Spirit jailbreaks.

                                                                                                                                              -

                                                                                                                                              iTunes is a multimedia player and device manager. A comprehensive music store, originally made for Apple users to download and play music. Now available on Windows too, iTunes is the perfect place to organise music, watch TV shows and movies, create playlists and more. Through iTunes, users are equipped to record CDs, edit music files, purchase music and videos from the iTunes Store, and basically access music easily and legally

                                                                                                                                              -

                                                                                                                                              -

iTunes is an excellent program for organizing and maintaining a personal music repository. Thousands of downloaded songs and albums are easy to categorize and sort, and can be arranged into custom playlists. Essentially an mp3 player, iTunes allows users to shuffle through their song collection and experience randomized playlists, enhancing the listening experience. For Apple users, iTunes can sync across multiple devices, so you can maintain the same iTunes library everywhere. The visualizer feature adds graphical effects to the song being played, adding a new layer to the music experience. iTunes is also a great platform for movies and TV shows. One can edit, rename, or move files, and even change the file format. The music can be played across devices, which are no longer restricted to Apple hardware. One can easily purchase music through the iTunes Store and gain access to millions of songs and albums from around the world. iTunes also now has a Radio feature, which connects you to streaming radio channels from across the world, covering a range of genres, topics, and music preferences. While predominantly perceived as a music organizer, iTunes also offers a vast library of e-books, podcasts, audiobooks, videos, and more. iTunes is, in fact, ideal for storing audiobooks and organizing them properly.

                                                                                                                                              -

                                                                                                                                              An excellent user-friendly interface, with simple navigation and features, makes iTunes a great platform to store and experience music. For those users who remember having to sieve through messy folders for music, iTunes came as an absolute boon. Now, apps and music are so readily available, iTunes is still the place to store an entire library of downloaded mp3, which is not streaming online, but owned by the user. Whether users access it from the Mac or from an iPhone or any Windows system, the quality of the program and the seamless experience remains the same.

                                                                                                                                              -

                                                                                                                                              The biggest drawback of iTunes is the restrictive nature of acquiring music or accessing new music. While the program itself is free, most new features require a paid subscription. Radio or streaming of videos also requires a paid subscription or is pay-to-download. Other music apps or even YouTube, which offer so much readily available free content for streaming are fast becoming the user preference. Music lovers do not now need to download songs, that they can stream any time. Unless one has a large library of downloaded music, millennial listeners just need to go on any streaming app and get access to unlimited songs and music videos.

                                                                                                                                              -

                                                                                                                                              An excellent multimedia library and device manager, that lets users maintain a personalized library of songs that are saved, instead of streamed. The downloaded or ripped files belong to the user and are accessible at any time. Even in the age of online streaming, there is merit in having a collection of music, video, and books, just for personal consumption, and iTunes allows users to maintain this expansive collection. iTunes is easy to use, great for sorting music according to artists, albums, genres, and songs. Even with no internet or net facility, one can access music on the device and is a personalized media consumption. iTunes is the go-to library for users with a vast collection of music, and with a Windows version, it now has a reach much wider than the Apple loyalists.

                                                                                                                                              -

                                                                                                                                              Apple has released tvOS 9.2.1 for the Apple TV 4th generation, and WatchOS 2.2.1 for Apple Watch. Both releases include bug fixes and minor feature enhancements and are recommended for the respective devices.

                                                                                                                                              -

                                                                                                                                              iTunes 12.4 can be downloaded through the Mac App Store, or through the iTunes app itself in Windows or Mac OS X. Users can also choose to download the update through the official iTunes website at apple.com/itunes.

                                                                                                                                              -

                                                                                                                                              When we connect the iPhone to this computer, we get an error window saying "The iPhone cannot be used because it requires iTunes version 10.1 or later. Go to www.itunes.com to download the latest version of iTunes."

                                                                                                                                              -

                                                                                                                                              However, the iTunes download page says that iTunes 10.1.2 requires OS X 10.5 or later, and indeed, trying to download and install that version results in an error window saying "This package type requires Mac OS X 10.5."

                                                                                                                                              -

                                                                                                                                              (This is all a bit surprising, because iTunes 9.2.1 worked fine with an older iPhone and with an iPod Touch. It's hard for me to imagine what new magic in 10.5 is required to support syncing with a slightly newer model of iPhone.)

                                                                                                                                              -

Title: iTunes 9.2.1 (64-bit)
File Size: 93.2 MB
Requirements: Windows Vista 64 / Windows 7 64 / Windows 8 64
Language: en-us
License: Freeware
Date Added: 20 Jul 2010
Publisher: Apple Inc
Homepage:
MD5 Checksum: E7F469AA8F8576A17185970447E89FE6
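If you want to confirm that the installer arrived intact, you can compare the file's MD5 digest against the checksum listed above. Below is a minimal Python sketch of that check; the filename `iTunes64Setup.exe` is only a placeholder for whatever name your download was saved under.

```python
import hashlib

# Placeholder filename - substitute the actual name of the downloaded installer.
INSTALLER_PATH = "iTunes64Setup.exe"
EXPECTED_MD5 = "E7F469AA8F8576A17185970447E89FE6"  # checksum listed above


def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks to keep memory use low."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = md5_of(INSTALLER_PATH)
    if actual.lower() == EXPECTED_MD5.lower():
        print("Checksum matches - the download is intact.")
    else:
        print(f"Checksum mismatch: expected {EXPECTED_MD5}, got {actual} - re-download the file.")
```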

                                                                                                                                              -

                                                                                                                                              On September 1, 2010, Apple held their annual music press event where they unveiled an updated version: iTunes 10. The new version was available for download later that day. One major feature includes the integration of "iTunes Ping", which brings a social factor to the iTunes experience. Apple CEO Steve Jobs also announced a new logo, one without a CD in the background because of the increasing popularity of iTunes digital downloads.

                                                                                                                                              -

                                                                                                                                              In October 2012, Apple announced the launch of the iPhone 5 and iPad Mini, the refresh of the iPod and Mac lines, and the upcoming release of iTunes 11. Slated for release by the end of October, the launch was pushed back to November 29, 2012. This version included tighter integration with iCloud, and a new user interface. Users' libraries now include all media they have stored in their iCloud account, along with any media unique to the device they are using. Media files stored in the cloud don't need to be downloaded before playing, allowing a larger collection to be accessible without increased disk usage. The new user interface includes a refreshed grid view, which replaces Cover Flow as the default layout method. With this change, Cover Flow is no longer available within the application. With the release of this software, the iTunes Store was redesigned to remain consistent with the new interface, and the stores available on iOS devices. The social element Ping was also removed and replaced by increased Twitter and Facebook integration. Other minor changes included disabling the sidebar by default, and slightly altering the icon to match that of the Mac App Store better.

                                                                                                                                              -

                                                                                                                                              Apple noted that iTunes 2 would be included on every new Mac system beginning in November, once the product ships. It requires Mac OS 9.2.1 or Mac OS X 10.1 or later. It is available now for free download.

                                                                                                                                              -

                                                                                                                                              If there's an update present, iTunes will notify you of it. Simply go ahead with the installation process and iTunes will even download the IPSW file for you so you don't have to do much legwork from your end.

                                                                                                                                              -

                                                                                                                                              iTunes download for Windows looks highly similar to the Mac app. The entertainment tool retains its white-colored interface, clean and minimalistic function placement, and convenient navigation. However, the features that make it stand apart from other similar applications are high-quality music downloads with no expiration date, multiple device support, family sharing, and a free trial of Apple Music.

                                                                                                                                              -

                                                                                                                                              Once downloaded, you can use it to access music files saved on your dashboard, listen to the radio, or buy music from the iTunes Store. All your purchases get saved in your library, and you can download them as and when you like. Downloading iTunes also gives you a free trial of Apple Music, a streaming service with over 70 million songs.

                                                                                                                                              -

We are currently blocking the Apple update sites with our firewall to prevent users from upgrading their iPads to iOS 9.3. However, there are many that I want upgraded to 9.2.1. I'm downloading all the .ipsw files. Can I manually add these to the caching server?

                                                                                                                                              -

                                                                                                                                              -08D2-4E0A-A5CD-155E345EFB83
In short, if I'm reading that right, once one device on the network has downloaded 9.2.1 it should already be cached. Of course, forcing iOS devices to get 9.2.1 is a different story; the caching server will not do that.

                                                                                                                                              -

                                                                                                                                              I have it configured to download iOS for all models and Mac OS updates, but be warned -- CacheWarmer generates a massive amount of network traffic as soon as Apple releases software. When Apple seeded the updated iOS 9.3 build yesterday, each of my servers downloaded over 200 GB of incremental upgrade packages for every version of iOS and every variety of iOS device.
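For readers curious what "warming" a cache amounts to in principle, here is a rough, hypothetical Python sketch: it simply fetches each update file once onto the local network so later clients can be served locally instead of going back out to Apple. The URL and directory are placeholders, and this is not how CacheWarmer or Apple's caching service work internally.

```python
import os
import urllib.request

# Placeholder values - real IPSW URLs come from Apple's update catalogs, and a real
# deployment would hand the files to the caching service rather than a plain folder.
IPSW_URLS = [
    "https://example.com/ipsw/iPad_9.2.1_Restore.ipsw",
]
MIRROR_DIR = "/srv/ipsw-mirror"


def warm_cache(urls, dest_dir):
    """Download each file once so later clients on the LAN can be served locally."""
    os.makedirs(dest_dir, exist_ok=True)
    for url in urls:
        target = os.path.join(dest_dir, os.path.basename(url))
        if os.path.exists(target):
            continue  # already mirrored - skip to avoid re-downloading
        print(f"Fetching {url} ...")
        urllib.request.urlretrieve(url, target)


if __name__ == "__main__":
    warm_cache(IPSW_URLS, MIRROR_DIR)
```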

                                                                                                                                              -

                                                                                                                                              I have one server scheduled to check for new packages hourly (default) and the other only checks in the evening, so one server can download all the files from Apple and then pass them on to the other later that day.

                                                                                                                                              -

The problem I am trying to solve is that my network admin blocked access to the Apple update sites in order to prevent users from installing iOS 9.3. However, I do want them all up to 9.2.1 and was trying to see if I could get 9.2.1 onto the caching server.

                                                                                                                                              aaccfb2cb3
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/HD Online Player (Raees hindi movie mp4 free download) - Download Raees in mp4 format and watch it on any device.md b/spaces/gotiQspiryo/whisper-ui/examples/HD Online Player (Raees hindi movie mp4 free download) - Download Raees in mp4 format and watch it on any device.md deleted file mode 100644 index 955dff5f6c6937b9acf1c9a753952eedf0f6da6c..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/HD Online Player (Raees hindi movie mp4 free download) - Download Raees in mp4 format and watch it on any device.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                              HD Online Player (Raees hindi movie mp4 free download)


                                                                                                                                              Download Ziphttps://urlgoal.com/2uyLGF



                                                                                                                                              -
                                                                                                                                              - aaccfb2cb3
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -

                                                                                                                                              diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Macdrive 9 Serial Number Texture.md b/spaces/gotiQspiryo/whisper-ui/examples/Macdrive 9 Serial Number Texture.md deleted file mode 100644 index d3e57b57bb9faa9ccca67db556c591c0ff8649cf..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Macdrive 9 Serial Number Texture.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                              macdrive 9 serial number texture


                                                                                                                                              Download 🆓 https://urlgoal.com/2uyLV9



                                                                                                                                              - - 899543212b
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -

                                                                                                                                              diff --git a/spaces/gradio/HuBERT/fairseq/dataclass/configs.py b/spaces/gradio/HuBERT/fairseq/dataclass/configs.py deleted file mode 100644 index b0146fa4c7332c9f8b1f6bcff7977399dfc46f08..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/dataclass/configs.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys -from dataclasses import _MISSING_TYPE, dataclass, field -from typing import Any, List, Optional - -import torch - -from fairseq.dataclass.constants import ( - DATASET_IMPL_CHOICES, - DDP_BACKEND_CHOICES, - DDP_COMM_HOOK_CHOICES, - GENERATION_CONSTRAINTS_CHOICES, - GENERATION_DECODING_FORMAT_CHOICES, - LOG_FORMAT_CHOICES, - PIPELINE_CHECKPOINT_CHOICES, - PRINT_ALIGNMENT_CHOICES, - ZERO_SHARDING_CHOICES, -) - -from omegaconf import II, MISSING - - -@dataclass -class FairseqDataclass: - """fairseq base dataclass that supported fetching attributes and metas""" - - _name: Optional[str] = None - - @staticmethod - def name(): - return None - - def _get_all_attributes(self) -> List[str]: - return [k for k in self.__dataclass_fields__.keys()] - - def _get_meta( - self, attribute_name: str, meta: str, default: Optional[Any] = None - ) -> Any: - return self.__dataclass_fields__[attribute_name].metadata.get(meta, default) - - def _get_name(self, attribute_name: str) -> str: - return self.__dataclass_fields__[attribute_name].name - - def _get_default(self, attribute_name: str) -> Any: - if hasattr(self, attribute_name): - if str(getattr(self, attribute_name)).startswith("${"): - return str(getattr(self, attribute_name)) - elif str(self.__dataclass_fields__[attribute_name].default).startswith( - "${" - ): - return str(self.__dataclass_fields__[attribute_name].default) - elif ( - getattr(self, attribute_name) - != self.__dataclass_fields__[attribute_name].default - ): - return getattr(self, attribute_name) - - f = self.__dataclass_fields__[attribute_name] - if not isinstance(f.default_factory, _MISSING_TYPE): - return f.default_factory() - return f.default - - def _get_type(self, attribute_name: str) -> Any: - return self.__dataclass_fields__[attribute_name].type - - def _get_help(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "help") - - def _get_argparse_const(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "argparse_const") - - def _get_argparse_alias(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "argparse_alias") - - def _get_choices(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "choices") - - -@dataclass -class CommonConfig(FairseqDataclass): - # This is the core dataclass including common parameters shared by all different jobs. Please append your params to other dataclasses if they were - # used for a particular purpose or task, such as those dedicated for `distributed training`, `optimization`, etc. 
- no_progress_bar: bool = field( - default=False, metadata={"help": "disable progress bar"} - ) - log_interval: int = field( - default=100, - metadata={ - "help": "log progress every N batches (when progress bar is disabled)" - }, - ) - log_format: Optional[LOG_FORMAT_CHOICES] = field( - default=None, metadata={"help": "log format to use"} - ) - log_file: Optional[str] = field( - default=None, metadata={"help": "log file to copy metrics to."} - ) - tensorboard_logdir: Optional[str] = field( - default=None, - metadata={ - "help": "path to save logs for tensorboard, should match --logdir " - "of running tensorboard (default: no tensorboard logging)" - }, - ) - wandb_project: Optional[str] = field( - default=None, - metadata={"help": "Weights and Biases project name to use for logging"}, - ) - azureml_logging: Optional[bool] = field( - default=False, metadata={"help": "Log scalars to AzureML context"}, - ) - seed: int = field( - default=1, metadata={"help": "pseudo random number generator seed"} - ) - cpu: bool = field(default=False, metadata={"help": "use CPU instead of CUDA"}) - tpu: bool = field(default=False, metadata={"help": "use TPU instead of CUDA"}) - bf16: bool = field(default=False, metadata={"help": "use bfloat16; implies --tpu"}) - memory_efficient_bf16: bool = field( - default=False, - metadata={ - "help": "use a memory-efficient version of BF16 training; implies --bf16" - }, - ) - fp16: bool = field(default=False, metadata={"help": "use FP16"}) - memory_efficient_fp16: bool = field( - default=False, - metadata={ - "help": "use a memory-efficient version of FP16 training; implies --fp16" - }, - ) - fp16_no_flatten_grads: bool = field( - default=False, metadata={"help": "don't flatten FP16 grads tensor"} - ) - fp16_init_scale: int = field( - default=2 ** 7, metadata={"help": "default FP16 loss scale"} - ) - fp16_scale_window: Optional[int] = field( - default=None, - metadata={"help": "number of updates before increasing loss scale"}, - ) - fp16_scale_tolerance: float = field( - default=0.0, - metadata={ - "help": "pct of updates that can overflow before decreasing the loss scale" - }, - ) - on_cpu_convert_precision: bool = field( - default=False, - metadata={ - "help": "if set, the floating point conversion to fp16/bf16 runs on CPU. " - "This reduces bus transfer time and GPU memory usage." 
- } - ) - min_loss_scale: float = field( - default=1e-4, - metadata={"help": "minimum FP16/AMP loss scale, after which training is stopped"}, - ) - threshold_loss_scale: Optional[float] = field( - default=None, metadata={"help": "threshold FP16 loss scale from below"} - ) - amp: bool = field(default=False, metadata={"help": "use automatic mixed precision"}) - amp_batch_retries: int = field( - default=2, - metadata={"help": "number of retries of same batch after reducing loss scale with AMP"}, - ) - amp_init_scale: int = field( - default=2 ** 7, metadata={"help": "default AMP loss scale"} - ) - amp_scale_window: Optional[int] = field( - default=None, - metadata={"help": "number of updates before increasing AMP loss scale"}, - ) - user_dir: Optional[str] = field( - default=None, - metadata={ - "help": "path to a python module containing custom extensions (tasks and/or architectures)" - }, - ) - empty_cache_freq: int = field( - default=0, - metadata={"help": "how often to clear the PyTorch CUDA cache (0 to disable)"}, - ) - all_gather_list_size: int = field( - default=16384, - metadata={"help": "number of bytes reserved for gathering stats from workers"}, - ) - model_parallel_size: int = field( - default=1, metadata={"help": "total number of GPUs to parallelize model over"} - ) - quantization_config_path: Optional[str] = field( - default=None, metadata={"help": "path to quantization config file"} - ) - profile: bool = field( - default=False, metadata={"help": "enable autograd profiler emit_nvtx"} - ) - reset_logging: bool = field( - default=False, - metadata={ - "help": "when using Hydra, reset the logging at the beginning of training" - }, - ) - suppress_crashes: bool = field( - default=False, - metadata={ - "help": "suppress crashes when training with the hydra_train entry point so that the " - "main method can return a value (useful for sweeps)" - }, - ) - use_plasma_view: bool = field( - default=False, metadata={"help": "Store indices and sizes in shared memory"} - ) - plasma_path: Optional[str] = field( - default="/tmp/plasma", - metadata={ - "help": "path to run plasma_store, defaults to /tmp/plasma. Paths outside /tmp tend to fail." 
- }, - ) - - -@dataclass -class DistributedTrainingConfig(FairseqDataclass): - distributed_world_size: int = field( - default=max(1, torch.cuda.device_count()), - metadata={ - "help": "total number of GPUs across all nodes (default: all visible GPUs)" - }, - ) - distributed_num_procs: Optional[int] = field( - default=max(1, torch.cuda.device_count()), - metadata={ - "help": "total number of processes to fork (default: all visible GPUs)" - }, - ) - distributed_rank: Optional[int] = field( - default=0, metadata={"help": "rank of the current worker"} - ) - distributed_backend: str = field( - default="nccl", metadata={"help": "distributed backend"} - ) - distributed_init_method: Optional[str] = field( - default=None, - metadata={ - "help": "typically tcp://hostname:port that will be used to " - "establish initial connetion" - }, - ) - distributed_port: int = field( - default=-1, - metadata={ - "help": "port number (not required if using --distributed-init-method)" - }, - ) - device_id: int = field( - default=0, - metadata={ - "help": "which GPU to use (usually configured automatically)", - "argparse_alias": "--local_rank", - }, - ) - distributed_no_spawn: bool = field( - default=False, - metadata={ - "help": "do not spawn multiple processes even if multiple GPUs are visible" - }, - ) - ddp_backend: DDP_BACKEND_CHOICES = field( - default="pytorch_ddp", metadata={"help": "DistributedDataParallel backend"} - ) - ddp_comm_hook: DDP_COMM_HOOK_CHOICES = field( - default="none", metadata={"help": "communication hook"} - ) - bucket_cap_mb: int = field( - default=25, metadata={"help": "bucket size for reduction"} - ) - fix_batches_to_gpus: bool = field( - default=False, - metadata={ - "help": "don't shuffle batches between GPUs; this reduces overall " - "randomness and may affect precision but avoids the cost of re-reading the data" - }, - ) - find_unused_parameters: bool = field( - default=False, - metadata={ - "help": "disable unused parameter detection (not applicable to " - "--ddp-backend=legacy_ddp)" - }, - ) - fast_stat_sync: bool = field( - default=False, - metadata={"help": "[deprecated] this is now defined per Criterion"}, - ) - heartbeat_timeout: int = field( - default=-1, - metadata={ - "help": "kill the job if no progress is made in N seconds; " - "set to -1 to disable" - }, - ) - broadcast_buffers: bool = field( - default=False, - metadata={ - "help": "Copy non-trainable parameters between GPUs, such as " - "batchnorm population statistics" - }, - ) - slowmo_momentum: Optional[float] = field( - default=None, - metadata={ - "help": "SlowMo momentum term; by default use 0.0 for 16 GPUs, " - "0.2 for 32 GPUs; 0.5 for 64 GPUs, 0.6 for > 64 GPUs" - }, - ) - slowmo_algorithm: str = field( - default="LocalSGD", metadata={"help": "whether to use LocalSGD or SGP"} - ) - localsgd_frequency: int = field( - default=3, metadata={"help": "Local SGD allreduce frequency"} - ) - nprocs_per_node: int = field( - default=max(1, torch.cuda.device_count()), - metadata={ - "help": "number of GPUs in each node. An allreduce operation across GPUs in " - "a node is very fast. Hence, we do allreduce across GPUs in a node, " - "and gossip across different nodes" - }, - ) - pipeline_model_parallel: bool = field( - default=False, - metadata={"help": "if set, use pipeline model parallelism across GPUs"}, - ) - pipeline_balance: Optional[str] = field( - default=None, - metadata={ - "help": "partition the model into N_K pieces, where each piece " - "contains N_i layers. 
The sum(args.pipeline_balance) " - "should equal the total number of layers in the model" - }, - ) - pipeline_devices: Optional[str] = field( - default=None, - metadata={ - "help": "a list of device indices indicating which device to place " - "each of the N_K partitions. The length of this list should " - "equal the length of the --pipeline-balance argument" - }, - ) - pipeline_chunks: Optional[int] = field( - default=0, metadata={"help": "microbatch count for pipeline model parallelism"} - ) - pipeline_encoder_balance: Optional[str] = field( - default=None, - metadata={ - "help": "partition the pipeline parallel encoder into N_K pieces, where each piece " - "contains N_i layers. The sum(args.pipeline_encoder_balance) " - "should equal the total number of encoder layers in the model" - }, - ) - pipeline_encoder_devices: Optional[str] = field( - default=None, - metadata={ - "help": "a list of device indices indicating which device to place " - "each of the N_K partitions. The length of this list should " - "equal the length of the --pipeline-encoder-balance argument" - }, - ) - pipeline_decoder_balance: Optional[str] = field( - default=None, - metadata={ - "help": "partition the pipeline parallel decoder into N_K pieces, where each piece " - "contains N_i layers. The sum(args.pipeline_decoder_balance) " - "should equal the total number of decoder layers in the model" - }, - ) - pipeline_decoder_devices: Optional[str] = field( - default=None, - metadata={ - "help": "a list of device indices indicating which device to place " - "each of the N_K partitions. The length of this list should " - "equal the length of the --pipeline-decoder-balance argument" - }, - ) - pipeline_checkpoint: PIPELINE_CHECKPOINT_CHOICES = field( - default="never", - metadata={"help": "checkpointing mode for pipeline model parallelism"}, - ) - zero_sharding: ZERO_SHARDING_CHOICES = field( - default="none", metadata={"help": "ZeRO sharding"} - ) - fp16: bool = II("common.fp16") - memory_efficient_fp16: bool = II("common.memory_efficient_fp16") - tpu: bool = II("common.tpu") - # configuration for --ddp-backend=fully_sharded - no_reshard_after_forward: bool = field( - default=False, metadata={"help": "don't reshard parameters after forward pass"}, - ) - fp32_reduce_scatter: bool = field( - default=False, metadata={"help": "reduce-scatter grads in FP32"}, - ) - cpu_offload: bool = field( - default=False, metadata={"help": "offload FP32 params to CPU"} - ) - use_sharded_state: bool = field( - default=False, metadata={"help": "use sharded checkpoint files"}, - ) - - -@dataclass -class DatasetConfig(FairseqDataclass): - num_workers: int = field( - default=1, metadata={"help": "how many subprocesses to use for data loading"} - ) - skip_invalid_size_inputs_valid_test: bool = field( - default=False, - metadata={"help": "ignore too long or too short lines in valid and test set"}, - ) - max_tokens: Optional[int] = field( - default=None, metadata={"help": "maximum number of tokens in a batch"} - ) - batch_size: Optional[int] = field( - default=None, - metadata={ - "help": "number of examples in a batch", - "argparse_alias": "--max-sentences", - }, - ) - required_batch_size_multiple: int = field( - default=8, metadata={"help": "batch size will be a multiplier of this value"} - ) - required_seq_len_multiple: int = field( - default=1, - metadata={ - "help": "maximum sequence length in batch will be a multiplier of this value" - }, - ) - dataset_impl: Optional[DATASET_IMPL_CHOICES] = field( - default=None, metadata={"help": "output 
dataset implementation"} - ) - data_buffer_size: int = field( - default=10, metadata={"help": "Number of batches to preload"} - ) - train_subset: str = field( - default="train", - metadata={"help": "data subset to use for training (e.g. train, valid, test)"}, - ) - valid_subset: str = field( - default="valid", - metadata={ - "help": "comma separated list of data subsets to use for validation" - " (e.g. train, valid, test)" - }, - ) - combine_valid_subsets: Optional[bool] = field( - default=None, - metadata={ - "help": "comma separated list of data subsets to use for validation" - " (e.g. train, valid, test)", - "argparse_alias": "--combine-val", - }, - ) - ignore_unused_valid_subsets: Optional[bool] = field( - default=False, - metadata={"help": "do not raise error if valid subsets are ignored"}, - ) - - validate_interval: int = field( - default=1, metadata={"help": "validate every N epochs"} - ) - validate_interval_updates: int = field( - default=0, metadata={"help": "validate every N updates"} - ) - validate_after_updates: int = field( - default=0, metadata={"help": "dont validate until reaching this many updates"} - ) - fixed_validation_seed: Optional[int] = field( - default=None, metadata={"help": "specified random seed for validation"} - ) - disable_validation: bool = field( - default=False, metadata={"help": "disable validation"} - ) - max_tokens_valid: Optional[int] = field( - default=II("dataset.max_tokens"), - metadata={ - "help": "maximum number of tokens in a validation batch" - " (defaults to --max-tokens)" - }, - ) - batch_size_valid: Optional[int] = field( - default=II("dataset.batch_size"), - metadata={ - "help": "batch size of the validation batch (defaults to --batch-size)", - "argparse_alias": "--max-sentences-valid", - }, - ) - max_valid_steps: Optional[int] = field(default=None, metadata={'help': 'How many batches to evaluate', - "argparse_alias": "--nval"}) - curriculum: int = field( - default=0, metadata={"help": "don't shuffle batches for first N epochs"} - ) - gen_subset: str = field( - default="test", - metadata={"help": "data subset to generate (train, valid, test)"}, - ) - num_shards: int = field( - default=1, metadata={"help": "shard generation over N shards"} - ) - shard_id: int = field( - default=0, metadata={"help": "id of the shard to generate (id < num_shards)"} - ) - - -@dataclass -class OptimizationConfig(FairseqDataclass): - max_epoch: int = field( - default=0, metadata={"help": "force stop training at specified epoch"} - ) - max_update: int = field( - default=0, metadata={"help": "force stop training at specified update"} - ) - stop_time_hours: float = field( - default=0, - metadata={ - "help": "force stop training after specified cumulative time (if >0)" - }, - ) - clip_norm: float = field( - default=0.0, metadata={"help": "clip threshold of gradients"} - ) - sentence_avg: bool = field( - default=False, - metadata={ - "help": "normalize gradients by the number of sentences in a batch" - " (default is to normalize by number of tokens)" - }, - ) - update_freq: List[int] = field( - default_factory=lambda: [1], - metadata={"help": "update parameters every N_i batches, when in epoch i"}, - ) - lr: List[float] = field( - default_factory=lambda: [0.25], - metadata={ - "help": "learning rate for the first N epochs; all epochs >N using LR_N" - " (note: this may be interpreted differently depending on --lr-scheduler)" - }, - ) - stop_min_lr: float = field( - default=-1.0, - metadata={"help": "stop training when the learning rate reaches this minimum"}, - ) - 
use_bmuf: bool = field( - default=False, - metadata={ - "help": "specify global optimizer for syncing models on different GPUs/shards" - }, - ) - - -@dataclass -class CheckpointConfig(FairseqDataclass): - save_dir: str = field( - default="checkpoints", metadata={"help": "path to save checkpoints"} - ) - restore_file: str = field( - default="checkpoint_last.pt", - metadata={ - "help": "filename from which to load checkpoint " - "(default: /checkpoint_last.pt" - }, - ) - finetune_from_model: Optional[str] = field( - default=None, - metadata={ - "help": "finetune from a pretrained model; note that meters and lr scheduler will be reset" - }, - ) - reset_dataloader: bool = field( - default=False, - metadata={ - "help": "if set, does not reload dataloader state from the checkpoint" - }, - ) - reset_lr_scheduler: bool = field( - default=False, - metadata={ - "help": "if set, does not load lr scheduler state from the checkpoint" - }, - ) - reset_meters: bool = field( - default=False, - metadata={"help": "if set, does not load meters from the checkpoint"}, - ) - reset_optimizer: bool = field( - default=False, - metadata={"help": "if set, does not load optimizer state from the checkpoint"}, - ) - optimizer_overrides: str = field( - default="{}", - metadata={ - "help": "a dictionary used to override optimizer args when loading a checkpoint" - }, - ) - save_interval: int = field( - default=1, metadata={"help": "save a checkpoint every N epochs"} - ) - save_interval_updates: int = field( - default=0, metadata={"help": "save a checkpoint (and validate) every N updates"} - ) - keep_interval_updates: int = field( - default=-1, - metadata={ - "help": "keep the last N checkpoints saved with --save-interval-updates" - }, - ) - keep_interval_updates_pattern: int = field( - default=-1, - metadata={ - "help": "when used with --keep-interval-updates, skips deleting " - "any checkpoints with update X where " - "X %% keep_interval_updates_pattern == 0" - }, - ) - keep_last_epochs: int = field( - default=-1, metadata={"help": "keep last N epoch checkpoints"} - ) - keep_best_checkpoints: int = field( - default=-1, metadata={"help": "keep best N checkpoints based on scores"} - ) - no_save: bool = field( - default=False, metadata={"help": "don't save models or checkpoints"} - ) - no_epoch_checkpoints: bool = field( - default=False, metadata={"help": "only store last and best checkpoints"} - ) - no_last_checkpoints: bool = field( - default=False, metadata={"help": "don't store last checkpoints"} - ) - no_save_optimizer_state: bool = field( - default=False, - metadata={"help": "don't save optimizer-state as part of checkpoint"}, - ) - best_checkpoint_metric: str = field( - default="loss", metadata={"help": 'metric to use for saving "best" checkpoints'} - ) - maximize_best_checkpoint_metric: bool = field( - default=False, - metadata={ - "help": 'select the largest metric value for saving "best" checkpoints' - }, - ) - patience: int = field( - default=-1, - metadata={ - "help": ( - "early stop training if valid performance doesn't " - "improve for N consecutive validation runs; note " - "that this is influenced by --validate-interval" - ) - }, - ) - checkpoint_suffix: str = field( - default="", metadata={"help": "suffix to add to the checkpoint file name"} - ) - checkpoint_shard_count: int = field( - default=1, - metadata={ - "help": "Number of shards containing the checkpoint - " - "if the checkpoint is over 300GB, it is preferable " - "to split it into shards to prevent OOM on CPU while loading " - "the checkpoint" - }, 
- ) - load_checkpoint_on_all_dp_ranks: bool = field( - default=False, - metadata={ - "help": "load checkpoints on all data parallel devices " - "(default: only load on rank 0 and broadcast to other devices)" - }, - ) - write_checkpoints_asynchronously: bool = field( - default=False, - metadata={ - "help": ( - "Write checkpoints asynchronously in a separate " - "thread. NOTE: This feature is currently being tested." - ), - "argparse_alias": "--save-async", - }, - ) - model_parallel_size: int = II("common.model_parallel_size") - - -@dataclass -class FairseqBMUFConfig(FairseqDataclass): - block_lr: float = field( - default=1, metadata={"help": "block learning rate for bmuf"} - ) - block_momentum: float = field( - default=0.875, metadata={"help": "block momentum for bmuf"} - ) - global_sync_iter: int = field( - default=50, metadata={"help": "Iteration for syncing global model"} - ) - warmup_iterations: int = field( - default=500, metadata={"help": "warmup iterations for model to broadcast"} - ) - use_nbm: bool = field( - default=False, - metadata={"help": "Specify whether you want to use classical BM / Nesterov BM"}, - ) - average_sync: bool = field( - default=False, - metadata={ - "help": "Specify whether you want to average the local momentum after each sync" - }, - ) - distributed_world_size: int = II("distributed_training.distributed_world_size") - - -@dataclass -class GenerationConfig(FairseqDataclass): - beam: int = field( - default=5, metadata={"help": "beam size"}, - ) - nbest: int = field( - default=1, metadata={"help": "number of hypotheses to output"}, - ) - max_len_a: float = field( - default=0, - metadata={ - "help": "generate sequences of maximum length ax + b, where x is the source length" - }, - ) - max_len_b: int = field( - default=200, - metadata={ - "help": "generate sequences of maximum length ax + b, where x is the source length" - }, - ) - min_len: int = field( - default=1, metadata={"help": "minimum generation length"}, - ) - match_source_len: bool = field( - default=False, metadata={"help": "generations should match the source length"}, - ) - unnormalized: bool = field( - default=False, metadata={"help": "compare unnormalized hypothesis scores"}, - ) - no_early_stop: bool = field( - default=False, metadata={"help": "deprecated"}, - ) - no_beamable_mm: bool = field( - default=False, metadata={"help": "don't use BeamableMM in attention layers"}, - ) - lenpen: float = field( - default=1, - metadata={ - "help": "length penalty: <1.0 favors shorter, >1.0 favors longer sentences" - }, - ) - unkpen: float = field( - default=0, - metadata={ - "help": "unknown word penalty: <0 produces more unks, >0 produces fewer" - }, - ) - replace_unk: Optional[str] = field( - default=None, - metadata={ - "help": "perform unknown replacement (optionally with alignment dictionary)", - "argparse_const": "@@ ", - }, - ) - sacrebleu: bool = field( - default=False, metadata={"help": "score with sacrebleu"}, - ) - score_reference: bool = field( - default=False, metadata={"help": "just score the reference translation"}, - ) - prefix_size: int = field( - default=0, - metadata={"help": "initialize generation by target prefix of given length"}, - ) - no_repeat_ngram_size: int = field( - default=0, - metadata={ - "help": "ngram blocking such that this size ngram cannot be repeated in the generation" - }, - ) - sampling: bool = field( - default=False, - metadata={"help": "sample hypotheses instead of using beam search"}, - ) - sampling_topk: int = field( - default=-1, - metadata={"help": "sample from top 
K likely next words instead of all words"}, - ) - sampling_topp: float = field( - default=-1.0, - metadata={ - "help": "sample from the smallest set whose cumulative probability mass exceeds p for next words" - }, - ) - constraints: Optional[GENERATION_CONSTRAINTS_CHOICES] = field( - default=None, - metadata={ - "help": "enables lexically constrained decoding", - "argparse_const": "ordered", - }, - ) - temperature: float = field( - default=1.0, metadata={"help": "temperature for generation"}, - ) - diverse_beam_groups: int = field( - default=-1, metadata={"help": "number of groups for Diverse Beam Search"}, - ) - diverse_beam_strength: float = field( - default=0.5, - metadata={"help": "strength of diversity penalty for Diverse Beam Search"}, - ) - diversity_rate: float = field( - default=-1.0, - metadata={"help": "strength of diversity penalty for Diverse Siblings Search"}, - ) - print_alignment: Optional[PRINT_ALIGNMENT_CHOICES] = field( - default=None, - metadata={ - "help": "if set, uses attention feedback to compute and print alignment to source tokens " - "(valid options are: hard, soft, otherwise treated as hard alignment)", - "argparse_const": "hard", - }, - ) - print_step: bool = field( - default=False, metadata={"help": "print steps"}, - ) - lm_path: Optional[str] = field( - default=None, metadata={"help": "path to lm checkpoint for lm fusion"}, - ) - lm_weight: float = field( - default=0.0, metadata={"help": "weight for lm probs for lm fusion"}, - ) - - # arguments for iterative refinement generator - iter_decode_eos_penalty: float = field( - default=0.0, - metadata={"help": "if > 0.0, it penalized early-stopping in decoding."}, - ) - iter_decode_max_iter: int = field( - default=10, metadata={"help": "maximum iterations for iterative refinement."}, - ) - iter_decode_force_max_iter: bool = field( - default=False, - metadata={ - "help": "if set, run exact the maximum number of iterations without early stop" - }, - ) - iter_decode_with_beam: int = field( - default=1, - metadata={ - "help": "if > 1, model will generate translations varying by the lengths." - }, - ) - iter_decode_with_external_reranker: bool = field( - default=False, - metadata={ - "help": "if set, the last checkpoint are assumed to be a reranker to rescore the translations" - }, - ) - retain_iter_history: bool = field( - default=False, - metadata={ - "help": "if set, decoding returns the whole history of iterative refinement" - }, - ) - retain_dropout: bool = field( - default=False, metadata={"help": "Use dropout at inference time"}, - ) - # temporarily set to Any until https://github.com/facebookresearch/hydra/issues/1117 is fixed - # retain_dropout_modules: Optional[List[str]] = field( - retain_dropout_modules: Any = field( - default=None, - metadata={ - "help": "if set, only retain dropout for the specified modules; " - "if not set, then dropout will be retained for all modules" - }, - ) - # special decoding format for advanced decoding. 
- decoding_format: Optional[GENERATION_DECODING_FORMAT_CHOICES] = field( - default=None, - metadata={"help": "special decoding format for advanced decoding."}, - ) - no_seed_provided: bool = field( - default=False, - metadata={"help": "if set, dont use seed for initializing random generators"}, - ) - - -@dataclass -class CommonEvalConfig(FairseqDataclass): - path: Optional[str] = field( - default=None, metadata={"help": "path(s) to model file(s), colon separated"}, - ) - post_process: Optional[str] = field( - default=None, - metadata={ - "help": ( - "post-process text by removing BPE, letter segmentation, etc. " - "Valid options can be found in fairseq.data.utils.post_process." - ), - "argparse_const": "subword_nmt", - "argparse_alias": "--remove-bpe", - }, - ) - quiet: bool = field(default=False, metadata={"help": "only print final scores"}) - model_overrides: str = field( - default="{}", - metadata={ - "help": "a dictionary used to override model args at generation that were used during model training" - }, - ) - results_path: Optional[str] = field( - default=None, metadata={"help": "path to save eval results (optional)"} - ) - - -@dataclass -class EvalLMConfig(FairseqDataclass): - output_word_probs: bool = field( - default=False, - metadata={ - "help": "if set, outputs words and their predicted log probabilities to standard output" - }, - ) - output_word_stats: bool = field( - default=False, - metadata={ - "help": "if set, outputs word statistics such as word count, average probability, etc" - }, - ) - context_window: int = field( - default=0, - metadata={ - "help": "ensures that every evaluated token has access to a context of at least this size, if possible" - }, - ) - softmax_batch: int = field( - default=sys.maxsize, - metadata={ - "help": "if BxT is more than this, will batch the softmax over vocab to this amount of tokens, in order to fit into GPU memory" - }, - ) - - -@dataclass -class InteractiveConfig(FairseqDataclass): - buffer_size: int = field( - default=0, - metadata={ - "help": "read this many sentences into a buffer before processing them" - }, - ) - input: str = field( - default="-", metadata={"help": "file to read from; use - for stdin"}, - ) - - -@dataclass -class FairseqConfig(FairseqDataclass): - common: CommonConfig = CommonConfig() - common_eval: CommonEvalConfig = CommonEvalConfig() - distributed_training: DistributedTrainingConfig = DistributedTrainingConfig() - dataset: DatasetConfig = DatasetConfig() - optimization: OptimizationConfig = OptimizationConfig() - checkpoint: CheckpointConfig = CheckpointConfig() - bmuf: FairseqBMUFConfig = FairseqBMUFConfig() - generation: GenerationConfig = GenerationConfig() - eval_lm: EvalLMConfig = EvalLMConfig() - interactive: InteractiveConfig = InteractiveConfig() - model: Any = MISSING - task: Any = None - criterion: Any = None - optimizer: Any = None - lr_scheduler: Any = None - scoring: Any = None - bpe: Any = None - tokenizer: Any = None diff --git a/spaces/gradio/text_analysis/README.md b/spaces/gradio/text_analysis/README.md deleted file mode 100644 index f0cdbc949e2fad8bbef651127e37c29aab4e639e..0000000000000000000000000000000000000000 --- a/spaces/gradio/text_analysis/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: text_analysis -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gshotwell/multi-query-sentiment/www/theme.css b/spaces/gshotwell/multi-query-sentiment/www/theme.css deleted file mode 100644 index 
8c41bea6eb9c727f8d2b0bb2c4694bfc76c74a46..0000000000000000000000000000000000000000 --- a/spaces/gshotwell/multi-query-sentiment/www/theme.css +++ /dev/null @@ -1,15143 +0,0 @@ -/*! - * Bootstrap v5.2.2 (https://getbootstrap.com/) - * Copyright 2011-2022 The Bootstrap Authors - * Copyright 2011-2022 Twitter, Inc. - * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) - */ -@import url("font.css"); - -:root { - --bs-blue: #325d88; - --bs-indigo: #6610f2; - --bs-purple: #6f42c1; - --bs-pink: #e83e8c; - --bs-red: #d9534f; - --bs-orange: #f47c3c; - --bs-yellow: #ffc107; - --bs-green: #93c54b; - --bs-teal: #20c997; - --bs-cyan: #29abe0; - --bs-black: #000; - --bs-white: #fff; - --bs-gray: #8e8c84; - --bs-gray-dark: #3e3f3a; - --bs-gray-100: #f8f9fa; - --bs-gray-200: #f8f5f0; - --bs-gray-300: #dfd7ca; - --bs-gray-400: #ced4da; - --bs-gray-500: #98978b; - --bs-gray-600: #8e8c84; - --bs-gray-700: #495057; - --bs-gray-800: #3e3f3a; - --bs-gray-900: #212529; - --bs-default: #8e8c84; - --bs-primary: #325d88; - --bs-secondary: #8e8c84; - --bs-success: #93c54b; - --bs-info: #29abe0; - --bs-warning: #f47c3c; - --bs-danger: #d9534f; - --bs-light: #f8f5f0; - --bs-dark: #3e3f3a; - --bs-default-rgb: 142, 140, 132; - --bs-primary-rgb: 50, 93, 136; - --bs-secondary-rgb: 142, 140, 132; - --bs-success-rgb: 147, 197, 75; - --bs-info-rgb: 41, 171, 224; - --bs-warning-rgb: 244, 124, 60; - --bs-danger-rgb: 217, 83, 79; - --bs-light-rgb: 248, 245, 240; - --bs-dark-rgb: 62, 63, 58; - --bs-white-rgb: 255, 255, 255; - --bs-black-rgb: 0, 0, 0; - --bs-body-color-rgb: 62, 63, 58; - --bs-body-bg-rgb: 255, 255, 255; - --bs-font-sans-serif: Roboto, -apple-system, BlinkMacSystemFont, "Segoe UI", "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - --bs-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; - --bs-gradient: linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0)); - --bs-body-font-family: var(--bs-font-sans-serif); - --bs-body-font-size: 1rem; - --bs-body-font-weight: 400; - --bs-body-line-height: 1.5; - --bs-body-color: #3e3f3a; - --bs-body-bg: #fff; - --bs-border-width: 1px; - --bs-border-style: solid; - --bs-border-color: #dfd7ca; - --bs-border-color-translucent: rgba(0, 0, 0, 0.175); - --bs-border-radius: .375rem; - --bs-border-radius-sm: .25rem; - --bs-border-radius-lg: .5rem; - --bs-border-radius-xl: 1rem; - --bs-border-radius-2xl: 2rem; - --bs-border-radius-pill: 50rem; - --bs-link-color: #93c54b; - --bs-link-hover-color: #769e3c; - --bs-code-color: #000; - --bs-highlight-bg: #fff3cd -} - -*, -*::before, -*::after { - box-sizing: border-box -} - -@media (prefers-reduced-motion: no-preference) { - :root { - scroll-behavior: smooth - } -} - -body { - margin: 0; - font-family: var(--bs-body-font-family); - font-size: var(--bs-body-font-size); - font-weight: var(--bs-body-font-weight); - line-height: var(--bs-body-line-height); - color: var(--bs-body-color); - text-align: var(--bs-body-text-align); - background-color: var(--bs-body-bg); - -webkit-text-size-adjust: 100%; - -webkit-tap-highlight-color: rgba(0, 0, 0, 0) -} - -hr { - margin: 1rem 0; - color: inherit; - border: 0; - border-top: 1px solid; - opacity: .25 -} - -h6, -.h6, -h5, -.h5, -h4, -.h4, -h3, -.h3, -h2, -.h2, -h1, -.h1 { - margin-top: 0; - margin-bottom: .5rem; - font-weight: 400; - line-height: 1.2 -} - -h1, -.h1 { - font-size: calc(1.375rem + 1.5vw) -} - -@media (min-width: 1200px) { - - h1, - .h1 { - font-size: 
2.5rem - } -} - -h2, -.h2 { - font-size: calc(1.325rem + .9vw) -} - -@media (min-width: 1200px) { - - h2, - .h2 { - font-size: 2rem - } -} - -h3, -.h3 { - font-size: calc(1.3rem + .6vw) -} - -@media (min-width: 1200px) { - - h3, - .h3 { - font-size: 1.75rem - } -} - -h4, -.h4 { - font-size: calc(1.275rem + .3vw) -} - -@media (min-width: 1200px) { - - h4, - .h4 { - font-size: 1.5rem - } -} - -h5, -.h5 { - font-size: 1.25rem -} - -h6, -.h6 { - font-size: 1rem -} - -p { - margin-top: 0; - margin-bottom: 1rem -} - -abbr[title] { - text-decoration: underline dotted; - -webkit-text-decoration: underline dotted; - -moz-text-decoration: underline dotted; - -ms-text-decoration: underline dotted; - -o-text-decoration: underline dotted; - cursor: help; - text-decoration-skip-ink: none -} - -address { - margin-bottom: 1rem; - font-style: normal; - line-height: inherit -} - -ol, -ul { - padding-left: 2rem -} - -ol, -ul, -dl { - margin-top: 0; - margin-bottom: 1rem -} - -ol ol, -ul ul, -ol ul, -ul ol { - margin-bottom: 0 -} - -dt { - font-weight: 700 -} - -dd { - margin-bottom: .5rem; - margin-left: 0 -} - -blockquote { - margin: 0 0 1rem; - padding: .625rem 1.25rem; - border-left: .25rem solid #f8f5f0 -} - -blockquote p:last-child, -blockquote ul:last-child, -blockquote ol:last-child { - margin-bottom: 0 -} - -b, -strong { - font-weight: bolder -} - -small, -.small { - font-size: .875em -} - -mark, -.mark { - padding: .1875em; - background-color: var(--bs-highlight-bg) -} - -sub, -sup { - position: relative; - font-size: .75em; - line-height: 0; - vertical-align: baseline -} - -sub { - bottom: -.25em -} - -sup { - top: -.5em -} - -a { - color: var(--bs-link-color); - text-decoration: underline; - -webkit-text-decoration: underline; - -moz-text-decoration: underline; - -ms-text-decoration: underline; - -o-text-decoration: underline -} - -a:hover { - color: var(--bs-link-hover-color) -} - -a:not([href]):not([class]), -a:not([href]):not([class]):hover { - color: inherit; - text-decoration: none -} - -pre, -code, -kbd, -samp { - font-family: var(--bs-font-monospace); - font-size: 1em -} - -pre { - display: block; - margin-top: 0; - margin-bottom: 1rem; - overflow: auto; - font-size: .875em; - color: #000; - background-color: #f7f7f7; - padding: .5rem; - border: 1px solid #dfd7ca; - border-radius: .375rem -} - -pre code { - background-color: transparent; - font-size: inherit; - color: inherit; - word-break: normal -} - -code { - font-size: .875em; - color: var(--bs-code-color); - background-color: #f7f7f7; - border-radius: .375rem; - padding: .125rem .25rem; - word-wrap: break-word -} - -a>code { - color: inherit -} - -kbd { - padding: .1875rem .375rem; - font-size: .875em; - color: var(--bs-body-bg); - background-color: var(--bs-body-color); - border-radius: .25rem -} - -kbd kbd { - padding: 0; - font-size: 1em -} - -figure { - margin: 0 0 1rem -} - -img, -svg { - vertical-align: middle -} - -table { - caption-side: bottom; - border-collapse: collapse -} - -caption { - padding-top: .5rem; - padding-bottom: .5rem; - color: #8e8c84; - text-align: left -} - -th { - text-align: inherit; - text-align: -webkit-match-parent -} - -thead, -tbody, -tfoot, -tr, -td, -th { - border-color: inherit; - border-style: solid; - border-width: 0 -} - -label { - display: inline-block -} - -button { - border-radius: 0 -} - -button:focus:not(:focus-visible) { - outline: 0 -} - -input, -button, -select, -optgroup, -textarea { - margin: 0; - font-family: inherit; - font-size: inherit; - line-height: inherit -} - -button, -select { 
- text-transform: none -} - -[role="button"] { - cursor: pointer -} - -select { - word-wrap: normal -} - -select:disabled { - opacity: 1 -} - -[list]:not([type="date"]):not([type="datetime-local"]):not([type="month"]):not([type="week"]):not([type="time"])::-webkit-calendar-picker-indicator { - display: none !important -} - -button, -[type="button"], -[type="reset"], -[type="submit"] { - -webkit-appearance: button -} - -button:not(:disabled), -[type="button"]:not(:disabled), -[type="reset"]:not(:disabled), -[type="submit"]:not(:disabled) { - cursor: pointer -} - -::-moz-focus-inner { - padding: 0; - border-style: none -} - -textarea { - resize: vertical -} - -fieldset { - min-width: 0; - padding: 0; - margin: 0; - border: 0 -} - -legend { - float: left; - width: 100%; - padding: 0; - margin-bottom: .5rem; - font-size: calc(1.275rem + .3vw); - line-height: inherit -} - -@media (min-width: 1200px) { - legend { - font-size: 1.5rem - } -} - -legend+* { - clear: left -} - -::-webkit-datetime-edit-fields-wrapper, -::-webkit-datetime-edit-text, -::-webkit-datetime-edit-minute, -::-webkit-datetime-edit-hour-field, -::-webkit-datetime-edit-day-field, -::-webkit-datetime-edit-month-field, -::-webkit-datetime-edit-year-field { - padding: 0 -} - -::-webkit-inner-spin-button { - height: auto -} - -[type="search"] { - outline-offset: -2px; - -webkit-appearance: textfield -} - -::-webkit-search-decoration { - -webkit-appearance: none -} - -::-webkit-color-swatch-wrapper { - padding: 0 -} - -::file-selector-button { - font: inherit; - -webkit-appearance: button -} - -output { - display: inline-block -} - -iframe { - border: 0 -} - -summary { - display: list-item; - cursor: pointer -} - -progress { - vertical-align: baseline -} - -[hidden] { - display: none !important -} - -.lead { - font-size: 1.25rem; - font-weight: 300 -} - -.display-1 { - font-size: calc(1.625rem + 4.5vw); - font-weight: 300; - line-height: 1.2 -} - -@media (min-width: 1200px) { - .display-1 { - font-size: 5rem - } -} - -.display-2 { - font-size: calc(1.575rem + 3.9vw); - font-weight: 300; - line-height: 1.2 -} - -@media (min-width: 1200px) { - .display-2 { - font-size: 4.5rem - } -} - -.display-3 { - font-size: calc(1.525rem + 3.3vw); - font-weight: 300; - line-height: 1.2 -} - -@media (min-width: 1200px) { - .display-3 { - font-size: 4rem - } -} - -.display-4 { - font-size: calc(1.475rem + 2.7vw); - font-weight: 300; - line-height: 1.2 -} - -@media (min-width: 1200px) { - .display-4 { - font-size: 3.5rem - } -} - -.display-5 { - font-size: calc(1.425rem + 2.1vw); - font-weight: 300; - line-height: 1.2 -} - -@media (min-width: 1200px) { - .display-5 { - font-size: 3rem - } -} - -.display-6 { - font-size: calc(1.375rem + 1.5vw); - font-weight: 300; - line-height: 1.2 -} - -@media (min-width: 1200px) { - .display-6 { - font-size: 2.5rem - } -} - -.list-unstyled { - padding-left: 0; - list-style: none -} - -.list-inline { - padding-left: 0; - list-style: none -} - -.list-inline-item { - display: inline-block -} - -.list-inline-item:not(:last-child) { - margin-right: .5rem -} - -.initialism { - font-size: .875em; - text-transform: uppercase -} - -.blockquote { - margin-bottom: 1rem; - font-size: 1.25rem -} - -.blockquote>:last-child { - margin-bottom: 0 -} - -.blockquote-footer { - margin-top: -1rem; - margin-bottom: 1rem; - font-size: .875em; - color: #8e8c84 -} - -.blockquote-footer::before { - content: "\2014\00A0" -} - -.img-fluid { - max-width: 100%; - height: auto -} - -.img-thumbnail { - padding: .25rem; - background-color: #fff; - 
border: 1px solid var(--bs-border-color); - border-radius: .375rem; - max-width: 100%; - height: auto -} - -.figure { - display: inline-block -} - -.figure-img { - margin-bottom: .5rem; - line-height: 1 -} - -.figure-caption { - font-size: .875em; - color: #8e8c84 -} - -.container, -.container-fluid, -.container-xxl, -.container-xl, -.container-lg, -.container-md, -.container-sm { - --bs-gutter-x: 1.5rem; - --bs-gutter-y: 0; - width: 100%; - padding-right: calc(var(--bs-gutter-x) * .5); - padding-left: calc(var(--bs-gutter-x) * .5); - margin-right: auto; - margin-left: auto -} - -@media (min-width: 576px) { - - .container-sm, - .container { - max-width: 540px - } -} - -@media (min-width: 768px) { - - .container-md, - .container-sm, - .container { - max-width: 720px - } -} - -@media (min-width: 992px) { - - .container-lg, - .container-md, - .container-sm, - .container { - max-width: 960px - } -} - -@media (min-width: 1200px) { - - .container-xl, - .container-lg, - .container-md, - .container-sm, - .container { - max-width: 1140px - } -} - -@media (min-width: 1400px) { - - .container-xxl, - .container-xl, - .container-lg, - .container-md, - .container-sm, - .container { - max-width: 1320px - } -} - -.row { - --bs-gutter-x: 1.5rem; - --bs-gutter-y: 0; - display: flex; - display: -webkit-flex; - flex-wrap: wrap; - -webkit-flex-wrap: wrap; - margin-top: calc(-1 * var(--bs-gutter-y)); - margin-right: calc(-.5 * var(--bs-gutter-x)); - margin-left: calc(-.5 * var(--bs-gutter-x)); -} - -.row>* { - flex-shrink: 0; - -webkit-flex-shrink: 0; - width: 100%; - max-width: 100%; - padding-right: calc(var(--bs-gutter-x) * .5); - padding-left: calc(var(--bs-gutter-x) * .5); - margin-top: var(--bs-gutter-y) -} - -.col { - flex: 1 0 0%; - -webkit-flex: 1 0 0% -} - -.row-cols-auto>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: auto -} - -.row-cols-1>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 100% -} - -.row-cols-2>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 50% -} - -.row-cols-3>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 33.33333% -} - -.row-cols-4>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 25% -} - -.row-cols-5>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 20% -} - -.row-cols-6>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 16.66667% -} - -.col-auto { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: auto -} - -.col-1 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 8.33333% -} - -.col-2 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 16.66667% -} - -.col-3 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 25% -} - -.col-4 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 33.33333% -} - -.col-5 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 41.66667% -} - -.col-6 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 50% -} - -.col-7 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 58.33333% -} - -.col-8 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 66.66667% -} - -.col-9 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 75% -} - -.col-10 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 83.33333% -} - -.col-11 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 91.66667% -} - -.col-12 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 100% -} - -.offset-1 { - margin-left: 8.33333% -} - -.offset-2 { - margin-left: 16.66667% -} - -.offset-3 { - margin-left: 25% -} - -.offset-4 { - margin-left: 33.33333% -} - -.offset-5 { - margin-left: 41.66667% -} - -.offset-6 
{ - margin-left: 50% -} - -.offset-7 { - margin-left: 58.33333% -} - -.offset-8 { - margin-left: 66.66667% -} - -.offset-9 { - margin-left: 75% -} - -.offset-10 { - margin-left: 83.33333% -} - -.offset-11 { - margin-left: 91.66667% -} - -.g-0, -.gx-0 { - --bs-gutter-x: 0 -} - -.g-0, -.gy-0 { - --bs-gutter-y: 0 -} - -.g-1, -.gx-1 { - --bs-gutter-x: .25rem -} - -.g-1, -.gy-1 { - --bs-gutter-y: .25rem -} - -.g-2, -.gx-2 { - --bs-gutter-x: .5rem -} - -.g-2, -.gy-2 { - --bs-gutter-y: .5rem -} - -.g-3, -.gx-3 { - --bs-gutter-x: 1rem -} - -.g-3, -.gy-3 { - --bs-gutter-y: 1rem -} - -.g-4, -.gx-4 { - --bs-gutter-x: 1.5rem -} - -.g-4, -.gy-4 { - --bs-gutter-y: 1.5rem -} - -.g-5, -.gx-5 { - --bs-gutter-x: 3rem -} - -.g-5, -.gy-5 { - --bs-gutter-y: 3rem -} - -@media (min-width: 576px) { - .col-sm { - flex: 1 0 0%; - -webkit-flex: 1 0 0% - } - - .row-cols-sm-auto>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: auto - } - - .row-cols-sm-1>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 100% - } - - .row-cols-sm-2>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 50% - } - - .row-cols-sm-3>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 33.33333% - } - - .row-cols-sm-4>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 25% - } - - .row-cols-sm-5>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 20% - } - - .row-cols-sm-6>* { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 16.66667% - } - - .col-sm-auto { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: auto - } - - .col-sm-1 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 8.33333% - } - - .col-sm-2 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 16.66667% - } - - .col-sm-3 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 25% - } - - .col-sm-4 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 33.33333% - } - - .col-sm-5 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 41.66667% - } - - .col-sm-6 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 50% - } - - .col-sm-7 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 58.33333% - } - - .col-sm-8 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 66.66667% - } - - .col-sm-9 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 75% - } - - .col-sm-10 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 83.33333% - } - - .col-sm-11 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 91.66667% - } - - .col-sm-12 { - flex: 0 0 auto; - -webkit-flex: 0 0 auto; - width: 100% - } - - .offset-sm-0 { - margin-left: 0 - } - - .offset-sm-1 { - margin-left: 8.33333% - } - - .offset-sm-2 { - margin-left: 16.66667% - } - - .offset-sm-3 { - margin-left: 25% - } - - .offset-sm-4 { - margin-left: 33.33333% - } - - .offset-sm-5 { - margin-left: 41.66667% - } - - .offset-sm-6 { - margin-left: 50% - } - - .offset-sm-7 { - margin-left: 58.33333% - } - - .offset-sm-8 { - margin-left: 66.66667% - } - - .offset-sm-9 { - margin-left: 75% - } - - .offset-sm-10 { - margin-left: 83.33333% - } - - .offset-sm-11 { - margin-left: 91.66667% - } - - .g-sm-0, - .gx-sm-0 { - --bs-gutter-x: 0 - } - - .g-sm-0, - .gy-sm-0 { - --bs-gutter-y: 0 - } - - .g-sm-1, - .gx-sm-1 { - --bs-gutter-x: .25rem - } - - .g-sm-1, - .gy-sm-1 { - --bs-gutter-y: .25rem - } - - .g-sm-2, - .gx-sm-2 { - --bs-gutter-x: .5rem - } - - .g-sm-2, - .gy-sm-2 { - --bs-gutter-y: .5rem - } - - .g-sm-3, - .gx-sm-3 { - --bs-gutter-x: 1rem - } - - .g-sm-3, - .gy-sm-3 { - --bs-gutter-y: 1rem - } - - .g-sm-4, - .gx-sm-4 { - --bs-gutter-x: 1.5rem - } - - .g-sm-4, - .gy-sm-4 { - 
-.nav-pills ul.nav.navbar-nav>li>a:disabled { - color: var(--bs-nav-link-disabled-color); - background-color: transparent; - border-color: transparent -} - -.nav-pills .nav-link.active, -.nav-pills .nav-tabs>li>a.active, -.nav-pills>li>a.active, -.nav-pills ul.nav.navbar-nav>li>a.active, -.nav-pills .show>.nav-link, -.nav-pills .in>.nav-link, -.nav-pills .nav-tabs>li.show>a, -.nav-pills .nav-tabs>li.in>a, -.nav-pills>li.show>a, -.nav-pills>li.in>a, -.nav-pills ul.nav.navbar-nav>li.show>a, -.nav-pills ul.nav.navbar-nav>li.in>a { - color: var(--bs-nav-pills-link-active-color); - background-color: var(--bs-nav-pills-link-active-bg); - background-image: var(--bs-gradient) -} - -.nav-fill>.nav-link, -.nav-tabs>li.nav-fill>a, -.nav-pills>li.nav-fill>a, -ul.nav.navbar-nav>li.nav-fill>a, -.nav-fill .nav-item, -.nav-fill .nav-tabs>li, -.nav-fill .nav-pills>li, -.nav-fill ul.nav.navbar-nav>li:not(.dropdown) { - flex: 1 1 auto; - -webkit-flex: 1 1 auto; - text-align: center -} - -.nav-justified>.nav-link, -.nav-tabs>li.nav-justified>a, -.nav-pills>li.nav-justified>a, -ul.nav.navbar-nav>li.nav-justified>a, -.nav-justified .nav-item, -.nav-justified .nav-tabs>li, -.nav-justified .nav-pills>li, -.nav-justified ul.nav.navbar-nav>li:not(.dropdown) { - flex-basis: 0; - -webkit-flex-basis: 0; - flex-grow: 1; - -webkit-flex-grow: 1; - text-align: center -} - -.nav-fill .nav-item .nav-link, -.nav-fill .nav-tabs>li .nav-link, -.nav-fill .nav-tabs>li>a, -.nav-fill .nav-pills>li .nav-link, -.nav-fill .nav-pills>li>a, -.nav-fill .nav-item ul.nav.navbar-nav>li>a, -.nav-fill .nav-tabs>li ul.nav.navbar-nav>li>a, -.nav-fill .nav-pills>li ul.nav.navbar-nav>li>a, -.nav-fill ul.nav.navbar-nav>li:not(.dropdown) .nav-link, -.nav-fill ul.nav.navbar-nav>li:not(.dropdown) .nav-tabs>li>a, -.nav-fill ul.nav.navbar-nav>li:not(.dropdown) .nav-pills>li>a, -.nav-fill ul.nav.navbar-nav>li:not(.dropdown) ul.nav.navbar-nav>li>a, -.nav-justified .nav-item .nav-link, -.nav-justified .nav-tabs>li .nav-link, -.nav-justified .nav-tabs>li>a, -.nav-justified .nav-pills>li .nav-link, -.nav-justified .nav-pills>li>a, -.nav-justified .nav-item ul.nav.navbar-nav>li>a, -.nav-justified .nav-tabs>li ul.nav.navbar-nav>li>a, -.nav-justified .nav-pills>li ul.nav.navbar-nav>li>a, -.nav-justified ul.nav.navbar-nav>li:not(.dropdown) .nav-link, -.nav-justified ul.nav.navbar-nav>li:not(.dropdown) .nav-tabs>li>a, -.nav-justified ul.nav.navbar-nav>li:not(.dropdown) .nav-pills>li>a, -.nav-justified ul.nav.navbar-nav>li:not(.dropdown) ul.nav.navbar-nav>li>a { - width: 100% -} - -.tab-content>.tab-pane { - display: none -} - -.tab-content>.active { - display: block -} - -.navbar { - --bs-navbar-padding-x: 0; - --bs-navbar-padding-y: .5rem; - --bs-navbar-color: rgba(255, 255, 255, 0.55); - --bs-navbar-hover-color: rgba(255, 255, 255, 0.7); - --bs-navbar-disabled-color: rgba(255, 255, 255, 0.3); - --bs-navbar-active-color: rgba(255, 255, 255, 0.9); - --bs-navbar-brand-padding-y: .3125rem; - --bs-navbar-brand-margin-end: 1rem; - --bs-navbar-brand-font-size: 1.25rem; - --bs-navbar-brand-color: rgba(255, 255, 255, 0.9); - --bs-navbar-brand-hover-color: rgba(255, 255, 255, 0.9); - --bs-navbar-nav-link-padding-x: .5rem; - --bs-navbar-toggler-padding-y: .25rem; - --bs-navbar-toggler-padding-x: .75rem; - --bs-navbar-toggler-font-size: 1.25rem; - --bs-navbar-toggler-icon-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255,255,255,0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' 
d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e"); - --bs-navbar-toggler-border-color: rgba(255, 255, 255, 0.1); - --bs-navbar-toggler-border-radius: .375rem; - --bs-navbar-toggler-focus-width: .25rem; - --bs-navbar-toggler-transition: box-shadow 0.15s ease-in-out; - position: relative; - display: flex; - display: -webkit-flex; - flex-wrap: wrap; - -webkit-flex-wrap: wrap; - align-items: center; - -webkit-align-items: center; - justify-content: space-between; - -webkit-justify-content: space-between; - padding: var(--bs-navbar-padding-y) var(--bs-navbar-padding-x); - background-image: var(--bs-gradient) -} - -.navbar>.container, -.navbar>.container-fluid, -.navbar>.container-sm, -.navbar>.container-md, -.navbar>.container-lg, -.navbar>.container-xl, -.navbar>.container-xxl { - display: flex; - display: -webkit-flex; - flex-wrap: inherit; - -webkit-flex-wrap: inherit; - align-items: center; - -webkit-align-items: center; - justify-content: space-between; - -webkit-justify-content: space-between -} - -.navbar-brand { - padding-top: var(--bs-navbar-brand-padding-y); - padding-bottom: var(--bs-navbar-brand-padding-y); - margin-right: var(--bs-navbar-brand-margin-end); - font-size: var(--bs-navbar-brand-font-size); - color: var(--bs-navbar-brand-color); - text-decoration: none; - -webkit-text-decoration: none; - -moz-text-decoration: none; - -ms-text-decoration: none; - -o-text-decoration: none; - white-space: nowrap -} - -.navbar-brand:hover, -.navbar-brand:focus { - color: var(--bs-navbar-brand-hover-color) -} - -.navbar-nav { - --bs-nav-link-padding-x: 0; - --bs-nav-link-padding-y: .5rem; - --bs-nav-link-font-weight: ; - --bs-nav-link-color: var(--bs-navbar-color); - --bs-nav-link-hover-color: var(--bs-navbar-hover-color); - --bs-nav-link-disabled-color: var(--bs-navbar-disabled-color); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - padding-left: 0; - margin-bottom: 0; - list-style: none -} - -.navbar-nav .show>.nav-link, -.navbar-nav .in>.nav-link, -.navbar-nav .nav-tabs>li.show>a, -.navbar-nav .nav-tabs>li.in>a, -.navbar-nav .nav-pills>li.show>a, -.navbar-nav .nav-pills>li.in>a, -ul.nav.navbar-nav>li.show>a, -ul.nav.navbar-nav>li.in>a, -.navbar-nav .active>.nav-link, -.navbar-nav .nav-tabs>li.active>a, -.navbar-nav .nav-pills>li.active>a, -ul.nav.navbar-nav>li.active>a, -.navbar-nav .nav-link.active, -.navbar-nav .nav-tabs>li>a.active, -.navbar-nav .nav-pills>li>a.active, -ul.nav.navbar-nav>li>a.active { - color: var(--bs-navbar-active-color) -} - -.navbar-nav .dropdown-menu { - position: static -} - -.navbar-text { - padding-top: .5rem; - padding-bottom: .5rem; - color: var(--bs-navbar-color) -} - -.navbar-text a, -.navbar-text a:hover, -.navbar-text a:focus { - color: var(--bs-navbar-active-color) -} - -.navbar-collapse { - flex-basis: 100%; - -webkit-flex-basis: 100%; - flex-grow: 1; - -webkit-flex-grow: 1; - align-items: center; - -webkit-align-items: center -} - -.navbar-toggler, -.navbar-toggle { - padding: var(--bs-navbar-toggler-padding-y) var(--bs-navbar-toggler-padding-x); - font-size: var(--bs-navbar-toggler-font-size); - line-height: 1; - color: var(--bs-navbar-color); - background-color: transparent; - border: var(--bs-border-width) solid var(--bs-navbar-toggler-border-color); - border-radius: var(--bs-navbar-toggler-border-radius) -} - -.navbar-toggler:hover, -.navbar-toggle:hover { - text-decoration: none -} - -.navbar-toggler:focus, -.navbar-toggle:focus { - text-decoration: none; - outline: 0; - box-shadow: 0 0 0 
var(--bs-navbar-toggler-focus-width) -} - -.navbar-toggler-icon, -.navbar-toggle>.icon-bar:last-child { - display: inline-block; - width: 1.5em; - height: 1.5em; - vertical-align: middle; - background-image: var(--bs-navbar-toggler-icon-bg); - background-repeat: no-repeat; - background-position: center; - background-size: 100% -} - -.navbar-nav-scroll { - max-height: var(--bs-scroll-height, 75vh); - overflow-y: auto -} - -@media (min-width: 576px) { - - .navbar-expand-sm, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) { - flex-wrap: nowrap; - -webkit-flex-wrap: nowrap; - justify-content: flex-start; - -webkit-justify-content: flex-start - } - - .navbar-expand-sm .navbar-nav, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-nav { - flex-direction: row; - -webkit-flex-direction: row - } - - .navbar-expand-sm .navbar-nav .dropdown-menu, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-nav .dropdown-menu { - position: absolute - } - - .navbar-expand-sm .navbar-nav .nav-link, - .navbar-expand-sm .navbar-nav .nav-tabs>li>a, - .navbar-expand-sm .navbar-nav .nav-pills>li>a, - .navbar-expand-sm ul.nav.navbar-nav>li>a, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-nav .nav-link, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-nav .nav-tabs>li>a, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-nav .nav-pills>li>a, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) ul.nav.navbar-nav>li>a { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x) - } - - .navbar-expand-sm .navbar-nav-scroll, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-nav-scroll { - overflow: visible - } - - .navbar-expand-sm .navbar-collapse, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-collapse { - display: flex !important; - display: -webkit-flex !important; - flex-basis: auto; - -webkit-flex-basis: auto - } - - .navbar-expand-sm .navbar-toggler, - .navbar-expand-sm .navbar-toggle, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-toggler, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .navbar-toggle { - display: none - } - - .navbar-expand-sm .offcanvas, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - -webkit-flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important - } - - .navbar-expand-sm .offcanvas .offcanvas-header, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) 
.offcanvas .offcanvas-header { - display: none - } - - .navbar-expand-sm .offcanvas .offcanvas-body, - .navbar:not(.navbar-expand):not(.navbar-expand-sm):not(.navbar-expand-md):not(.navbar-expand-lg):not(.navbar-expand-xl) .offcanvas .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible - } -} - -@media (min-width: 768px) { - .navbar-expand-md { - flex-wrap: nowrap; - -webkit-flex-wrap: nowrap; - justify-content: flex-start; - -webkit-justify-content: flex-start - } - - .navbar-expand-md .navbar-nav { - flex-direction: row; - -webkit-flex-direction: row - } - - .navbar-expand-md .navbar-nav .dropdown-menu { - position: absolute - } - - .navbar-expand-md .navbar-nav .nav-link, - .navbar-expand-md .navbar-nav .nav-tabs>li>a, - .navbar-expand-md .navbar-nav .nav-pills>li>a, - .navbar-expand-md ul.nav.navbar-nav>li>a { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x) - } - - .navbar-expand-md .navbar-nav-scroll { - overflow: visible - } - - .navbar-expand-md .navbar-collapse { - display: flex !important; - display: -webkit-flex !important; - flex-basis: auto; - -webkit-flex-basis: auto - } - - .navbar-expand-md .navbar-toggler, - .navbar-expand-md .navbar-toggle { - display: none - } - - .navbar-expand-md .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - -webkit-flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important - } - - .navbar-expand-md .offcanvas .offcanvas-header { - display: none - } - - .navbar-expand-md .offcanvas .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible - } -} - -@media (min-width: 992px) { - .navbar-expand-lg { - flex-wrap: nowrap; - -webkit-flex-wrap: nowrap; - justify-content: flex-start; - -webkit-justify-content: flex-start - } - - .navbar-expand-lg .navbar-nav { - flex-direction: row; - -webkit-flex-direction: row - } - - .navbar-expand-lg .navbar-nav .dropdown-menu { - position: absolute - } - - .navbar-expand-lg .navbar-nav .nav-link, - .navbar-expand-lg .navbar-nav .nav-tabs>li>a, - .navbar-expand-lg .navbar-nav .nav-pills>li>a, - .navbar-expand-lg ul.nav.navbar-nav>li>a { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x) - } - - .navbar-expand-lg .navbar-nav-scroll { - overflow: visible - } - - .navbar-expand-lg .navbar-collapse { - display: flex !important; - display: -webkit-flex !important; - flex-basis: auto; - -webkit-flex-basis: auto - } - - .navbar-expand-lg .navbar-toggler, - .navbar-expand-lg .navbar-toggle { - display: none - } - - .navbar-expand-lg .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - -webkit-flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important - } - - .navbar-expand-lg .offcanvas .offcanvas-header { - display: none - } - - .navbar-expand-lg .offcanvas .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible - } -} - -@media (min-width: 1200px) { - .navbar-expand-xl { - flex-wrap: nowrap; - -webkit-flex-wrap: nowrap; - justify-content: flex-start; - 
-webkit-justify-content: flex-start - } - - .navbar-expand-xl .navbar-nav { - flex-direction: row; - -webkit-flex-direction: row - } - - .navbar-expand-xl .navbar-nav .dropdown-menu { - position: absolute - } - - .navbar-expand-xl .navbar-nav .nav-link, - .navbar-expand-xl .navbar-nav .nav-tabs>li>a, - .navbar-expand-xl .navbar-nav .nav-pills>li>a, - .navbar-expand-xl ul.nav.navbar-nav>li>a { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x) - } - - .navbar-expand-xl .navbar-nav-scroll { - overflow: visible - } - - .navbar-expand-xl .navbar-collapse { - display: flex !important; - display: -webkit-flex !important; - flex-basis: auto; - -webkit-flex-basis: auto - } - - .navbar-expand-xl .navbar-toggler, - .navbar-expand-xl .navbar-toggle { - display: none - } - - .navbar-expand-xl .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - -webkit-flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important - } - - .navbar-expand-xl .offcanvas .offcanvas-header { - display: none - } - - .navbar-expand-xl .offcanvas .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible - } -} - -@media (min-width: 1400px) { - .navbar-expand-xxl { - flex-wrap: nowrap; - -webkit-flex-wrap: nowrap; - justify-content: flex-start; - -webkit-justify-content: flex-start - } - - .navbar-expand-xxl .navbar-nav { - flex-direction: row; - -webkit-flex-direction: row - } - - .navbar-expand-xxl .navbar-nav .dropdown-menu { - position: absolute - } - - .navbar-expand-xxl .navbar-nav .nav-link, - .navbar-expand-xxl .navbar-nav .nav-tabs>li>a, - .navbar-expand-xxl .navbar-nav .nav-pills>li>a, - .navbar-expand-xxl ul.nav.navbar-nav>li>a { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x) - } - - .navbar-expand-xxl .navbar-nav-scroll { - overflow: visible - } - - .navbar-expand-xxl .navbar-collapse { - display: flex !important; - display: -webkit-flex !important; - flex-basis: auto; - -webkit-flex-basis: auto - } - - .navbar-expand-xxl .navbar-toggler, - .navbar-expand-xxl .navbar-toggle { - display: none - } - - .navbar-expand-xxl .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - -webkit-flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important - } - - .navbar-expand-xxl .offcanvas .offcanvas-header { - display: none - } - - .navbar-expand-xxl .offcanvas .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible - } -} - -.navbar-expand { - flex-wrap: nowrap; - -webkit-flex-wrap: nowrap; - justify-content: flex-start; - -webkit-justify-content: flex-start -} - -.navbar-expand .navbar-nav { - flex-direction: row; - -webkit-flex-direction: row -} - -.navbar-expand .navbar-nav .dropdown-menu { - position: absolute -} - -.navbar-expand .navbar-nav .nav-link, -.navbar-expand .navbar-nav .nav-tabs>li>a, -.navbar-expand .navbar-nav .nav-pills>li>a, -.navbar-expand ul.nav.navbar-nav>li>a { - padding-right: var(--bs-navbar-nav-link-padding-x); - padding-left: var(--bs-navbar-nav-link-padding-x) -} - -.navbar-expand .navbar-nav-scroll { - overflow: visible -} - 
-.navbar-expand .navbar-collapse { - display: flex !important; - display: -webkit-flex !important; - flex-basis: auto; - -webkit-flex-basis: auto -} - -.navbar-expand .navbar-toggler, -.navbar-expand .navbar-toggle { - display: none -} - -.navbar-expand .offcanvas { - position: static; - z-index: auto; - flex-grow: 1; - -webkit-flex-grow: 1; - width: auto !important; - height: auto !important; - visibility: visible !important; - background-color: transparent !important; - border: 0 !important; - transform: none !important -} - -.navbar-expand .offcanvas .offcanvas-header { - display: none -} - -.navbar-expand .offcanvas .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible -} - -.navbar-light, -.navbar.navbar-default { - background-color: #3e3f3a -} - -.navbar-dark, -.navbar.navbar-inverse { - background-color: #93c54b; - --bs-navbar-color: rgba(255, 255, 255, 0.55); - --bs-navbar-hover-color: rgba(255, 255, 255, 0.75); - --bs-navbar-disabled-color: rgba(255, 255, 255, 0.25); - --bs-navbar-active-color: #fff; - --bs-navbar-brand-color: #fff; - --bs-navbar-brand-hover-color: #fff; - --bs-navbar-toggler-border-color: rgba(255, 255, 255, 0.1); - --bs-navbar-toggler-icon-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255,255,255,0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e") -} - -.card, -.well { - --bs-card-spacer-y: 1rem; - --bs-card-spacer-x: 1rem; - --bs-card-title-spacer-y: .5rem; - --bs-card-border-width: 1px; - --bs-card-border-color: rgba(223, 215, 202, 0.75); - --bs-card-border-radius: .375rem; - --bs-card-box-shadow: ; - --bs-card-inner-border-radius: calc(.375rem - 1px); - --bs-card-cap-padding-y: .5rem; - --bs-card-cap-padding-x: 1rem; - --bs-card-cap-bg: rgba(248, 245, 240, 0.25); - --bs-card-cap-color: ; - --bs-card-height: ; - --bs-card-color: ; - --bs-card-bg: #fff; - --bs-card-img-overlay-padding: 1rem; - --bs-card-group-margin: .75rem; - position: relative; - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - min-width: 0; - height: var(--bs-card-height); - word-wrap: break-word; - background-color: var(--bs-card-bg); - background-clip: border-box; - border: var(--bs-card-border-width) solid var(--bs-card-border-color); - border-radius: var(--bs-card-border-radius) -} - -.card>hr, -.well>hr { - margin-right: 0; - margin-left: 0 -} - -.card>.list-group, -.well>.list-group { - border-top: inherit; - border-bottom: inherit -} - -.card>.list-group:first-child, -.well>.list-group:first-child { - border-top-width: 0; - border-top-left-radius: var(--bs-card-inner-border-radius); - border-top-right-radius: var(--bs-card-inner-border-radius) -} - -.card>.list-group:last-child, -.well>.list-group:last-child { - border-bottom-width: 0; - border-bottom-right-radius: var(--bs-card-inner-border-radius); - border-bottom-left-radius: var(--bs-card-inner-border-radius) -} - -.card>.card-header+.list-group, -.well>.card-header+.list-group, -.card>.list-group+.card-footer, -.well>.list-group+.card-footer { - border-top: 0 -} - -.card-body { - flex: 1 1 auto; - -webkit-flex: 1 1 auto; - padding: var(--bs-card-spacer-y) var(--bs-card-spacer-x); - color: var(--bs-card-color) -} - -.card-title { - margin-bottom: var(--bs-card-title-spacer-y) -} - -.card-subtitle { - margin-top: calc(-.5 * var(--bs-card-title-spacer-y)); - 
margin-bottom: 0 -} - -.card-text:last-child { - margin-bottom: 0 -} - -.card-link+.card-link { - margin-left: var(--bs-card-spacer-x) -} - -.card-header { - padding: var(--bs-card-cap-padding-y) var(--bs-card-cap-padding-x); - margin-bottom: 0; - color: var(--bs-card-cap-color); - background-color: var(--bs-card-cap-bg); - border-bottom: var(--bs-card-border-width) solid var(--bs-card-border-color) -} - -.card-header:first-child { - border-radius: var(--bs-card-inner-border-radius) var(--bs-card-inner-border-radius) 0 0 -} - -.card-footer { - padding: var(--bs-card-cap-padding-y) var(--bs-card-cap-padding-x); - color: var(--bs-card-cap-color); - background-color: var(--bs-card-cap-bg); - border-top: var(--bs-card-border-width) solid var(--bs-card-border-color) -} - -.card-footer:last-child { - border-radius: 0 0 var(--bs-card-inner-border-radius) var(--bs-card-inner-border-radius) -} - -.card-header-tabs { - margin-right: calc(-.5 * var(--bs-card-cap-padding-x)); - margin-bottom: calc(-1 * var(--bs-card-cap-padding-y)); - margin-left: calc(-.5 * var(--bs-card-cap-padding-x)); - border-bottom: 0 -} - -.card-header-tabs .nav-link.active, -.card-header-tabs .nav-tabs>li>a.active, -.card-header-tabs .nav-pills>li>a.active, -.card-header-tabs ul.nav.navbar-nav>li>a.active { - background-color: var(--bs-card-bg); - border-bottom-color: var(--bs-card-bg) -} - -.card-header-pills { - margin-right: calc(-.5 * var(--bs-card-cap-padding-x)); - margin-left: calc(-.5 * var(--bs-card-cap-padding-x)) -} - -.card-img-overlay { - position: absolute; - top: 0; - right: 0; - bottom: 0; - left: 0; - padding: var(--bs-card-img-overlay-padding); - border-radius: var(--bs-card-inner-border-radius) -} - -.card-img, -.card-img-top, -.card-img-bottom { - width: 100% -} - -.card-img, -.card-img-top { - border-top-left-radius: var(--bs-card-inner-border-radius); - border-top-right-radius: var(--bs-card-inner-border-radius) -} - -.card-img, -.card-img-bottom { - border-bottom-right-radius: var(--bs-card-inner-border-radius); - border-bottom-left-radius: var(--bs-card-inner-border-radius) -} - -.card-group>.card, -.card-group>.well { - margin-bottom: var(--bs-card-group-margin) -} - -@media (min-width: 576px) { - .card-group { - display: flex; - display: -webkit-flex; - flex-flow: row wrap; - -webkit-flex-flow: row wrap - } - - .card-group>.card, - .card-group>.well { - flex: 1 0 0%; - -webkit-flex: 1 0 0%; - margin-bottom: 0 - } - - .card-group>.card+.card, - .card-group>.well+.card, - .card-group>.card+.well, - .card-group>.well+.well { - margin-left: 0; - border-left: 0 - } - - .card-group>.card:not(:last-child), - .card-group>.well:not(:last-child) { - border-top-right-radius: 0; - border-bottom-right-radius: 0 - } - - .card-group>.card:not(:last-child) .card-img-top, - .card-group>.well:not(:last-child) .card-img-top, - .card-group>.card:not(:last-child) .card-header, - .card-group>.well:not(:last-child) .card-header { - border-top-right-radius: 0 - } - - .card-group>.card:not(:last-child) .card-img-bottom, - .card-group>.well:not(:last-child) .card-img-bottom, - .card-group>.card:not(:last-child) .card-footer, - .card-group>.well:not(:last-child) .card-footer { - border-bottom-right-radius: 0 - } - - .card-group>.card:not(:first-child), - .card-group>.well:not(:first-child) { - border-top-left-radius: 0; - border-bottom-left-radius: 0 - } - - .card-group>.card:not(:first-child) .card-img-top, - .card-group>.well:not(:first-child) .card-img-top, - .card-group>.card:not(:first-child) .card-header, - 
.card-group>.well:not(:first-child) .card-header { - border-top-left-radius: 0 - } - - .card-group>.card:not(:first-child) .card-img-bottom, - .card-group>.well:not(:first-child) .card-img-bottom, - .card-group>.card:not(:first-child) .card-footer, - .card-group>.well:not(:first-child) .card-footer { - border-bottom-left-radius: 0 - } -} - -.accordion { - --bs-accordion-color: #3e3f3a; - --bs-accordion-bg: #fff; - --bs-accordion-transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out, border-radius 0.15s ease; - --bs-accordion-border-color: var(--bs-border-color); - --bs-accordion-border-width: 1px; - --bs-accordion-border-radius: .375rem; - --bs-accordion-inner-border-radius: calc(.375rem - 1px); - --bs-accordion-btn-padding-x: 1.25rem; - --bs-accordion-btn-padding-y: 1rem; - --bs-accordion-btn-color: #3e3f3a; - --bs-accordion-btn-bg: var(--bs-accordion-bg); - --bs-accordion-btn-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%233e3f3a'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e"); - --bs-accordion-btn-icon-width: 1.25rem; - --bs-accordion-btn-icon-transform: rotate(-180deg); - --bs-accordion-btn-icon-transition: transform 0.2s ease-in-out; - --bs-accordion-btn-active-icon: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill=''%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e"); - --bs-accordion-btn-focus-border-color: #99aec4; - --bs-accordion-btn-focus-box-shadow: 0 0 0 .25rem rgba(50, 93, 136, 0.25); - --bs-accordion-body-padding-x: 1.25rem; - --bs-accordion-body-padding-y: 1rem; - --bs-accordion-active-color: ; - --bs-accordion-active-bg: -} - -.accordion-button { - position: relative; - display: flex; - display: -webkit-flex; - align-items: center; - -webkit-align-items: center; - width: 100%; - padding: var(--bs-accordion-btn-padding-y) var(--bs-accordion-btn-padding-x); - font-size: 1rem; - color: var(--bs-accordion-btn-color); - text-align: left; - background-color: var(--bs-accordion-btn-bg); - border: 0; - border-radius: 0; - overflow-anchor: none -} - -.accordion-button:not(.collapsed) { - color: var(--bs-accordion-active-color); - background-color: var(--bs-accordion-active-bg); - box-shadow: inset 0 calc(-1 * var(--bs-accordion-border-width)) 0 var(--bs-accordion-border-color) -} - -.accordion-button:not(.collapsed)::after { - background-image: var(--bs-accordion-btn-active-icon); - transform: var(--bs-accordion-btn-icon-transform) -} - -.accordion-button::after { - flex-shrink: 0; - -webkit-flex-shrink: 0; - width: var(--bs-accordion-btn-icon-width); - height: var(--bs-accordion-btn-icon-width); - margin-left: auto; - content: ""; - background-image: var(--bs-accordion-btn-icon); - background-repeat: no-repeat; - background-size: var(--bs-accordion-btn-icon-width) -} - -.accordion-button:hover { - z-index: 2 -} - -.accordion-button:focus { - z-index: 3; - border-color: var(--bs-accordion-btn-focus-border-color); - outline: 0; - box-shadow: var(--bs-accordion-btn-focus-box-shadow) -} - -.accordion-header { - margin-bottom: 0 -} - -.accordion-item { - color: var(--bs-accordion-color); - background-color: var(--bs-accordion-bg); - border: var(--bs-accordion-border-width) solid 
var(--bs-accordion-border-color) -} - -.accordion-item:first-of-type { - border-top-left-radius: var(--bs-accordion-border-radius); - border-top-right-radius: var(--bs-accordion-border-radius) -} - -.accordion-item:first-of-type .accordion-button { - border-top-left-radius: var(--bs-accordion-inner-border-radius); - border-top-right-radius: var(--bs-accordion-inner-border-radius) -} - -.accordion-item:not(:first-of-type) { - border-top: 0 -} - -.accordion-item:last-of-type { - border-bottom-right-radius: var(--bs-accordion-border-radius); - border-bottom-left-radius: var(--bs-accordion-border-radius) -} - -.accordion-item:last-of-type .accordion-button.collapsed { - border-bottom-right-radius: var(--bs-accordion-inner-border-radius); - border-bottom-left-radius: var(--bs-accordion-inner-border-radius) -} - -.accordion-item:last-of-type .accordion-collapse { - border-bottom-right-radius: var(--bs-accordion-border-radius); - border-bottom-left-radius: var(--bs-accordion-border-radius) -} - -.accordion-body { - padding: var(--bs-accordion-body-padding-y) var(--bs-accordion-body-padding-x) -} - -.accordion-flush .accordion-collapse, -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-collapse { - border-width: 0 -} - -.accordion-flush .accordion-item, -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-item { - border-right: 0; - border-left: 0; - border-radius: 0 -} - -.accordion-flush .accordion-item:first-child, -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-item:first-child { - border-top: 0 -} - -.accordion-flush .accordion-item:last-child, -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-item:last-child { - border-bottom: 0 -} - -.accordion-flush .accordion-item .accordion-button, -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-item .accordion-button, -.accordion-flush .accordion-item .accordion-button.collapsed { - border-radius: 0 -} - -.breadcrumb { - --bs-breadcrumb-padding-x: .75rem; - --bs-breadcrumb-padding-y: .375rem; - --bs-breadcrumb-margin-bottom: 1rem; - --bs-breadcrumb-bg: #f8f5f0; - --bs-breadcrumb-border-radius: .25rem; - --bs-breadcrumb-divider-color: #8e8c84; - --bs-breadcrumb-item-padding-x: .5rem; - --bs-breadcrumb-item-active-color: #8e8c84; - display: flex; - display: -webkit-flex; - flex-wrap: wrap; - -webkit-flex-wrap: wrap; - padding: var(--bs-breadcrumb-padding-y) var(--bs-breadcrumb-padding-x); - margin-bottom: var(--bs-breadcrumb-margin-bottom); - font-size: var(--bs-breadcrumb-font-size); - list-style: none; - background-color: var(--bs-breadcrumb-bg); - border-radius: var(--bs-breadcrumb-border-radius) -} - -.breadcrumb-item+.breadcrumb-item { - padding-left: var(--bs-breadcrumb-item-padding-x) -} - -.breadcrumb-item+.breadcrumb-item::before { - float: left; - padding-right: var(--bs-breadcrumb-item-padding-x); - color: var(--bs-breadcrumb-divider-color); - content: var(--bs-breadcrumb-divider, "/") - /* rtl: var(--bs-breadcrumb-divider, "/") */ -} - -.breadcrumb-item.active { - color: var(--bs-breadcrumb-item-active-color) -} - -.pagination { - --bs-pagination-padding-x: .75rem; - --bs-pagination-padding-y: .375rem; - --bs-pagination-font-size: 1rem; - --bs-pagination-color: #8e8c84; - --bs-pagination-bg: #f8f5f0; - --bs-pagination-border-width: 1px; - --bs-pagination-border-color: #dfd7ca; - --bs-pagination-border-radius: .375rem; - --bs-pagination-hover-color: #8e8c84; - --bs-pagination-hover-bg: #f8f5f0; - --bs-pagination-hover-border-color: #dfd7ca; 
- --bs-pagination-focus-color: var(--bs-link-hover-color); - --bs-pagination-focus-bg: #f8f5f0; - --bs-pagination-focus-box-shadow: 0 0 0 .25rem rgba(50, 93, 136, 0.25); - --bs-pagination-active-color: #8e8c84; - --bs-pagination-active-bg: #dfd7ca; - --bs-pagination-active-border-color: #dfd7ca; - --bs-pagination-disabled-color: #dfd7ca; - --bs-pagination-disabled-bg: #f8f5f0; - --bs-pagination-disabled-border-color: #dfd7ca; - display: flex; - display: -webkit-flex; - padding-left: 0; - list-style: none -} - -.page-link { - position: relative; - display: block; - padding: var(--bs-pagination-padding-y) var(--bs-pagination-padding-x); - font-size: var(--bs-pagination-font-size); - color: var(--bs-pagination-color); - text-decoration: none; - -webkit-text-decoration: none; - -moz-text-decoration: none; - -ms-text-decoration: none; - -o-text-decoration: none; - background-color: var(--bs-pagination-bg); - border: var(--bs-pagination-border-width) solid var(--bs-pagination-border-color) -} - -.page-link:hover { - z-index: 2; - color: var(--bs-pagination-hover-color); - background-color: var(--bs-pagination-hover-bg); - border-color: var(--bs-pagination-hover-border-color) -} - -.page-link:focus { - z-index: 3; - color: var(--bs-pagination-focus-color); - background-color: var(--bs-pagination-focus-bg); - outline: 0; - box-shadow: var(--bs-pagination-focus-box-shadow) -} - -.page-link.active, -.active>.page-link { - z-index: 3; - color: var(--bs-pagination-active-color); - background-color: var(--bs-pagination-active-bg); - background-image: var(--bs-gradient); - border-color: var(--bs-pagination-active-border-color) -} - -.page-link.disabled, -.disabled>.page-link { - color: var(--bs-pagination-disabled-color); - pointer-events: none; - background-color: var(--bs-pagination-disabled-bg); - border-color: var(--bs-pagination-disabled-border-color) -} - -.page-item:not(:first-child) .page-link { - margin-left: -1px -} - -.page-item:first-child .page-link { - border-top-left-radius: var(--bs-pagination-border-radius); - border-bottom-left-radius: var(--bs-pagination-border-radius) -} - -.page-item:last-child .page-link { - border-top-right-radius: var(--bs-pagination-border-radius); - border-bottom-right-radius: var(--bs-pagination-border-radius) -} - -.pagination-lg { - --bs-pagination-padding-x: 1.5rem; - --bs-pagination-padding-y: .75rem; - --bs-pagination-font-size: 1.25rem; - --bs-pagination-border-radius: .5rem -} - -.pagination-sm { - --bs-pagination-padding-x: .5rem; - --bs-pagination-padding-y: .25rem; - --bs-pagination-font-size: .875rem; - --bs-pagination-border-radius: .25rem -} - -.badge { - --bs-badge-padding-x: .65em; - --bs-badge-padding-y: .35em; - --bs-badge-font-size: .75em; - --bs-badge-font-weight: 700; - --bs-badge-color: #fff; - --bs-badge-border-radius: .375rem; - display: inline-block; - padding: var(--bs-badge-padding-y) var(--bs-badge-padding-x); - font-size: var(--bs-badge-font-size); - font-weight: var(--bs-badge-font-weight); - line-height: 1; - color: var(--bs-badge-color); - text-align: center; - white-space: nowrap; - vertical-align: baseline; - border-radius: var(--bs-badge-border-radius); - background-image: var(--bs-gradient) -} - -.badge:empty { - display: none -} - -.btn .badge { - position: relative; - top: -1px -} - -.alert { - --bs-alert-bg: transparent; - --bs-alert-padding-x: 1rem; - --bs-alert-padding-y: 1rem; - --bs-alert-margin-bottom: 1rem; - --bs-alert-color: inherit; - --bs-alert-border-color: transparent; - --bs-alert-border: 1px solid 
var(--bs-alert-border-color); - --bs-alert-border-radius: .375rem; - position: relative; - padding: var(--bs-alert-padding-y) var(--bs-alert-padding-x); - margin-bottom: var(--bs-alert-margin-bottom); - color: var(--bs-alert-color); - background-color: var(--bs-alert-bg); - border: var(--bs-alert-border); - border-radius: var(--bs-alert-border-radius) -} - -.alert-heading { - color: inherit -} - -.alert-link { - font-weight: 700 -} - -.alert-dismissible { - padding-right: 3rem -} - -.alert-dismissible .btn-close { - position: absolute; - top: 0; - right: 0; - z-index: 2; - padding: 1.25rem 1rem -} - -.alert-default { - --bs-alert-color: #55544f; - --bs-alert-bg: #e8e8e6; - --bs-alert-border-color: #ddddda; - background-image: var(--bs-gradient) -} - -.alert-default .alert-link { - color: #44433f -} - -.alert-primary { - --bs-alert-color: #1e3852; - --bs-alert-bg: #d6dfe7; - --bs-alert-border-color: #c2cedb; - background-image: var(--bs-gradient) -} - -.alert-primary .alert-link { - color: #182d42 -} - -.alert-secondary { - --bs-alert-color: #55544f; - --bs-alert-bg: #e8e8e6; - --bs-alert-border-color: #ddddda; - background-image: var(--bs-gradient) -} - -.alert-secondary .alert-link { - color: #44433f -} - -.alert-success { - --bs-alert-color: #58762d; - --bs-alert-bg: #e9f3db; - --bs-alert-border-color: #dfeec9; - background-image: var(--bs-gradient) -} - -.alert-success .alert-link { - color: #465e24 -} - -.alert-info { - --bs-alert-color: #196786; - --bs-alert-bg: #d4eef9; - --bs-alert-border-color: #bfe6f6; - background-image: var(--bs-gradient) -} - -.alert-info .alert-link { - color: #14526b -} - -.alert-warning { - --bs-alert-color: #924a24; - --bs-alert-bg: #fde5d8; - --bs-alert-border-color: #fcd8c5; - background-image: var(--bs-gradient) -} - -.alert-warning .alert-link { - color: #753b1d -} - -.alert-danger { - --bs-alert-color: #82322f; - --bs-alert-bg: #f7dddc; - --bs-alert-border-color: #f4cbca; - background-image: var(--bs-gradient) -} - -.alert-danger .alert-link { - color: #682826 -} - -.alert-light { - --bs-alert-color: #959390; - --bs-alert-bg: #fefdfc; - --bs-alert-border-color: #fdfcfb; - background-image: var(--bs-gradient) -} - -.alert-light .alert-link { - color: #777673 -} - -.alert-dark { - --bs-alert-color: #252623; - --bs-alert-bg: #d8d9d8; - --bs-alert-border-color: #c5c5c4; - background-image: var(--bs-gradient) -} - -.alert-dark .alert-link { - color: #1e1e1c -} - -.progress { - --bs-progress-height: 1rem; - --bs-progress-font-size: .75rem; - --bs-progress-bg: #dfd7ca; - --bs-progress-border-radius: 10px; - --bs-progress-box-shadow: inset 0 1px 2px rgba(0, 0, 0, 0.075); - --bs-progress-bar-color: #325d88; - --bs-progress-bar-bg: #325d88; - --bs-progress-bar-transition: width 0.6s ease; - display: flex; - display: -webkit-flex; - height: var(--bs-progress-height); - overflow: hidden; - font-size: var(--bs-progress-font-size); - background-color: var(--bs-progress-bg); - border-radius: var(--bs-progress-border-radius) -} - -.progress-bar { - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - justify-content: center; - -webkit-justify-content: center; - overflow: hidden; - color: var(--bs-progress-bar-color); - text-align: center; - white-space: nowrap; - background-color: var(--bs-progress-bar-bg) -} - -.progress-bar-striped { - background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, 
transparent); - background-size: var(--bs-progress-height) var(--bs-progress-height) -} - -.list-group { - --bs-list-group-color: #212529; - --bs-list-group-bg: #fff; - --bs-list-group-border-color: #dfd7ca; - --bs-list-group-border-width: 1px; - --bs-list-group-border-radius: .375rem; - --bs-list-group-item-padding-x: 1rem; - --bs-list-group-item-padding-y: .5rem; - --bs-list-group-action-color: #3e3f3a; - --bs-list-group-action-hover-color: #3e3f3a; - --bs-list-group-action-hover-bg: #f8f5f0; - --bs-list-group-action-active-color: #3e3f3a; - --bs-list-group-action-active-bg: #dfd7ca; - --bs-list-group-disabled-color: #98978b; - --bs-list-group-disabled-bg: #fff; - --bs-list-group-active-color: #3e3f3a; - --bs-list-group-active-bg: #f8f5f0; - --bs-list-group-active-border-color: #dfd7ca; - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - padding-left: 0; - margin-bottom: 0; - border-radius: var(--bs-list-group-border-radius) -} - -.list-group-numbered { - list-style-type: none; - counter-reset: section -} - -.list-group-numbered>.list-group-item::before { - content: counters(section, ".") ". "; - counter-increment: section -} - -.list-group-item-action { - width: 100%; - color: var(--bs-list-group-action-color); - text-align: inherit -} - -.list-group-item-action:hover, -.list-group-item-action:focus { - z-index: 1; - color: var(--bs-list-group-action-hover-color); - text-decoration: none; - background-color: var(--bs-list-group-action-hover-bg) -} - -.list-group-item-action:active { - color: var(--bs-list-group-action-active-color); - background-color: var(--bs-list-group-action-active-bg) -} - -.list-group-item { - position: relative; - display: block; - padding: var(--bs-list-group-item-padding-y) var(--bs-list-group-item-padding-x); - color: var(--bs-list-group-color); - text-decoration: none; - -webkit-text-decoration: none; - -moz-text-decoration: none; - -ms-text-decoration: none; - -o-text-decoration: none; - background-color: var(--bs-list-group-bg); - border: var(--bs-list-group-border-width) solid var(--bs-list-group-border-color) -} - -.list-group-item:first-child { - border-top-left-radius: inherit; - border-top-right-radius: inherit -} - -.list-group-item:last-child { - border-bottom-right-radius: inherit; - border-bottom-left-radius: inherit -} - -.list-group-item.disabled, -.list-group-item:disabled { - color: var(--bs-list-group-disabled-color); - pointer-events: none; - background-color: var(--bs-list-group-disabled-bg) -} - -.list-group-item.active { - z-index: 2; - color: var(--bs-list-group-active-color); - background-color: var(--bs-list-group-active-bg); - border-color: var(--bs-list-group-active-border-color) -} - -.list-group-item+.list-group-item { - border-top-width: 0 -} - -.list-group-item+.list-group-item.active { - margin-top: calc(-1 * var(--bs-list-group-border-width)); - border-top-width: var(--bs-list-group-border-width) -} - -.list-group-horizontal { - flex-direction: row; - -webkit-flex-direction: row -} - -.list-group-horizontal>.list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0 -} - -.list-group-horizontal>.list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0 -} - -.list-group-horizontal>.list-group-item.active { - margin-top: 0 -} - -.list-group-horizontal>.list-group-item+.list-group-item { - border-top-width: 
var(--bs-list-group-border-width); - border-left-width: 0 -} - -.list-group-horizontal>.list-group-item+.list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width) -} - -@media (min-width: 576px) { - .list-group-horizontal-sm { - flex-direction: row; - -webkit-flex-direction: row - } - - .list-group-horizontal-sm>.list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0 - } - - .list-group-horizontal-sm>.list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0 - } - - .list-group-horizontal-sm>.list-group-item.active { - margin-top: 0 - } - - .list-group-horizontal-sm>.list-group-item+.list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0 - } - - .list-group-horizontal-sm>.list-group-item+.list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width) - } -} - -@media (min-width: 768px) { - .list-group-horizontal-md { - flex-direction: row; - -webkit-flex-direction: row - } - - .list-group-horizontal-md>.list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0 - } - - .list-group-horizontal-md>.list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0 - } - - .list-group-horizontal-md>.list-group-item.active { - margin-top: 0 - } - - .list-group-horizontal-md>.list-group-item+.list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0 - } - - .list-group-horizontal-md>.list-group-item+.list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width) - } -} - -@media (min-width: 992px) { - .list-group-horizontal-lg { - flex-direction: row; - -webkit-flex-direction: row - } - - .list-group-horizontal-lg>.list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0 - } - - .list-group-horizontal-lg>.list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0 - } - - .list-group-horizontal-lg>.list-group-item.active { - margin-top: 0 - } - - .list-group-horizontal-lg>.list-group-item+.list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0 - } - - .list-group-horizontal-lg>.list-group-item+.list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width) - } -} - -@media (min-width: 1200px) { - .list-group-horizontal-xl { - flex-direction: row; - -webkit-flex-direction: row - } - - .list-group-horizontal-xl>.list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0 - } - - .list-group-horizontal-xl>.list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0 - } - - .list-group-horizontal-xl>.list-group-item.active { - margin-top: 0 - } - - .list-group-horizontal-xl>.list-group-item+.list-group-item { - border-top-width: 
var(--bs-list-group-border-width); - border-left-width: 0 - } - - .list-group-horizontal-xl>.list-group-item+.list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width) - } -} - -@media (min-width: 1400px) { - .list-group-horizontal-xxl { - flex-direction: row; - -webkit-flex-direction: row - } - - .list-group-horizontal-xxl>.list-group-item:first-child:not(:last-child) { - border-bottom-left-radius: var(--bs-list-group-border-radius); - border-top-right-radius: 0 - } - - .list-group-horizontal-xxl>.list-group-item:last-child:not(:first-child) { - border-top-right-radius: var(--bs-list-group-border-radius); - border-bottom-left-radius: 0 - } - - .list-group-horizontal-xxl>.list-group-item.active { - margin-top: 0 - } - - .list-group-horizontal-xxl>.list-group-item+.list-group-item { - border-top-width: var(--bs-list-group-border-width); - border-left-width: 0 - } - - .list-group-horizontal-xxl>.list-group-item+.list-group-item.active { - margin-left: calc(-1 * var(--bs-list-group-border-width)); - border-left-width: var(--bs-list-group-border-width) - } -} - -.list-group-flush { - border-radius: 0 -} - -.list-group-flush>.list-group-item { - border-width: 0 0 var(--bs-list-group-border-width) -} - -.list-group-flush>.list-group-item:last-child { - border-bottom-width: 0 -} - -.list-group-item-default { - color: #55544f; - background-color: #e8e8e6 -} - -.list-group-item-default.list-group-item-action:hover, -.list-group-item-default.list-group-item-action:focus { - color: #55544f; - background-color: #d1d1cf -} - -.list-group-item-default.list-group-item-action.active { - color: #fff; - background-color: #55544f; - border-color: #55544f -} - -.list-group-item-primary { - color: #1e3852; - background-color: #d6dfe7 -} - -.list-group-item-primary.list-group-item-action:hover, -.list-group-item-primary.list-group-item-action:focus { - color: #1e3852; - background-color: #c1c9d0 -} - -.list-group-item-primary.list-group-item-action.active { - color: #fff; - background-color: #1e3852; - border-color: #1e3852 -} - -.list-group-item-secondary { - color: #55544f; - background-color: #e8e8e6 -} - -.list-group-item-secondary.list-group-item-action:hover, -.list-group-item-secondary.list-group-item-action:focus { - color: #55544f; - background-color: #d1d1cf -} - -.list-group-item-secondary.list-group-item-action.active { - color: #fff; - background-color: #55544f; - border-color: #55544f -} - -.list-group-item-success { - color: #58762d; - background-color: #e9f3db -} - -.list-group-item-success.list-group-item-action:hover, -.list-group-item-success.list-group-item-action:focus { - color: #58762d; - background-color: #d2dbc5 -} - -.list-group-item-success.list-group-item-action.active { - color: #fff; - background-color: #58762d; - border-color: #58762d -} - -.list-group-item-info { - color: #196786; - background-color: #d4eef9 -} - -.list-group-item-info.list-group-item-action:hover, -.list-group-item-info.list-group-item-action:focus { - color: #196786; - background-color: #bfd6e0 -} - -.list-group-item-info.list-group-item-action.active { - color: #fff; - background-color: #196786; - border-color: #196786 -} - -.list-group-item-warning { - color: #924a24; - background-color: #fde5d8 -} - -.list-group-item-warning.list-group-item-action:hover, -.list-group-item-warning.list-group-item-action:focus { - color: #924a24; - background-color: #e4cec2 -} - -.list-group-item-warning.list-group-item-action.active { - color: 
#fff; - background-color: #924a24; - border-color: #924a24 -} - -.list-group-item-danger { - color: #82322f; - background-color: #f7dddc -} - -.list-group-item-danger.list-group-item-action:hover, -.list-group-item-danger.list-group-item-action:focus { - color: #82322f; - background-color: #dec7c6 -} - -.list-group-item-danger.list-group-item-action.active { - color: #fff; - background-color: #82322f; - border-color: #82322f -} - -.list-group-item-light { - color: #959390; - background-color: #fefdfc -} - -.list-group-item-light.list-group-item-action:hover, -.list-group-item-light.list-group-item-action:focus { - color: #959390; - background-color: #e5e4e3 -} - -.list-group-item-light.list-group-item-action.active { - color: #fff; - background-color: #959390; - border-color: #959390 -} - -.list-group-item-dark { - color: #252623; - background-color: #d8d9d8 -} - -.list-group-item-dark.list-group-item-action:hover, -.list-group-item-dark.list-group-item-action:focus { - color: #252623; - background-color: #c2c3c2 -} - -.list-group-item-dark.list-group-item-action.active { - color: #fff; - background-color: #252623; - border-color: #252623 -} - -.btn-close { - box-sizing: content-box; - width: 1em; - height: 1em; - padding: .25em .25em; - color: #fff; - background: transparent url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M.293.293a1 1 0 0 1 1.414 0L8 6.586 14.293.293a1 1 0 1 1 1.414 1.414L9.414 8l6.293 6.293a1 1 0 0 1-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 0 1-1.414-1.414L6.586 8 .293 1.707a1 1 0 0 1 0-1.414z'/%3e%3c/svg%3e") center/1em auto no-repeat; - border: 0; - border-radius: .375rem; - opacity: .8 -} - -.btn-close:hover { - color: #fff; - text-decoration: none; - opacity: 1 -} - -.btn-close:focus { - outline: 0; - box-shadow: 0 0 0 .25rem rgba(50, 93, 136, 0.25); - opacity: 1 -} - -.btn-close:disabled, -.btn-close.disabled { - pointer-events: none; - user-select: none; - -webkit-user-select: none; - -moz-user-select: none; - -ms-user-select: none; - -o-user-select: none; - opacity: .25 -} - -.btn-close-white { - filter: invert(1) grayscale(100%) brightness(200%) -} - -.toast { - --bs-toast-zindex: 1090; - --bs-toast-padding-x: .75rem; - --bs-toast-padding-y: .5rem; - --bs-toast-spacing: 1.5rem; - --bs-toast-max-width: 350px; - --bs-toast-font-size: .875rem; - --bs-toast-color: ; - --bs-toast-bg: rgba(255, 255, 255, 0.85); - --bs-toast-border-width: 1px; - --bs-toast-border-color: var(--bs-border-color-translucent); - --bs-toast-border-radius: .375rem; - --bs-toast-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15); - --bs-toast-header-color: #8e8c84; - --bs-toast-header-bg: rgba(255, 255, 255, 0.85); - --bs-toast-header-border-color: rgba(0, 0, 0, 0.05); - width: var(--bs-toast-max-width); - max-width: 100%; - font-size: var(--bs-toast-font-size); - color: var(--bs-toast-color); - pointer-events: auto; - background-color: var(--bs-toast-bg); - background-clip: padding-box; - border: var(--bs-toast-border-width) solid var(--bs-toast-border-color); - box-shadow: var(--bs-toast-box-shadow); - border-radius: var(--bs-toast-border-radius) -} - -.toast.showing { - opacity: 0 -} - -.toast:not(.show):not(.in) { - display: none -} - -.toast-container { - --bs-toast-zindex: 1090; - position: absolute; - z-index: var(--bs-toast-zindex); - width: max-content; - width: -webkit-max-content; - width: -moz-max-content; - width: -ms-max-content; - width: -o-max-content; - max-width: 100%; - pointer-events: none -} - 
-.toast-container>:not(:last-child) { - margin-bottom: var(--bs-toast-spacing) -} - -.toast-header { - display: flex; - display: -webkit-flex; - align-items: center; - -webkit-align-items: center; - padding: var(--bs-toast-padding-y) var(--bs-toast-padding-x); - color: var(--bs-toast-header-color); - background-color: var(--bs-toast-header-bg); - background-clip: padding-box; - border-bottom: var(--bs-toast-border-width) solid var(--bs-toast-header-border-color); - border-top-left-radius: calc(var(--bs-toast-border-radius) - var(--bs-toast-border-width)); - border-top-right-radius: calc(var(--bs-toast-border-radius) - var(--bs-toast-border-width)) -} - -.toast-header .btn-close { - margin-right: calc(-.5 * var(--bs-toast-padding-x)); - margin-left: var(--bs-toast-padding-x) -} - -.toast-body { - padding: var(--bs-toast-padding-x); - word-wrap: break-word -} - -.modal { - --bs-modal-zindex: 1055; - --bs-modal-width: 500px; - --bs-modal-padding: 1rem; - --bs-modal-margin: .5rem; - --bs-modal-color: ; - --bs-modal-bg: #fff; - --bs-modal-border-color: #dfd7ca; - --bs-modal-border-width: 1px; - --bs-modal-border-radius: .5rem; - --bs-modal-box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075); - --bs-modal-inner-border-radius: calc(.5rem - 1px); - --bs-modal-header-padding-x: 1rem; - --bs-modal-header-padding-y: 1rem; - --bs-modal-header-padding: 1rem 1rem; - --bs-modal-header-border-color: #dfd7ca; - --bs-modal-header-border-width: 1px; - --bs-modal-title-line-height: 1.5; - --bs-modal-footer-gap: .5rem; - --bs-modal-footer-bg: ; - --bs-modal-footer-border-color: #dfd7ca; - --bs-modal-footer-border-width: 1px; - position: fixed; - top: 0; - left: 0; - z-index: var(--bs-modal-zindex); - display: none; - width: 100%; - height: 100%; - overflow-x: hidden; - overflow-y: auto; - outline: 0 -} - -.modal-dialog { - position: relative; - width: auto; - margin: var(--bs-modal-margin); - pointer-events: none -} - -.modal.fade .modal-dialog { - transform: translate(0, -50px) -} - -.modal.show .modal-dialog, -.modal.in .modal-dialog { - transform: none -} - -.modal.modal-static .modal-dialog { - transform: scale(1.02) -} - -.modal-dialog-scrollable { - height: calc(100% - var(--bs-modal-margin) * 2) -} - -.modal-dialog-scrollable .modal-content { - max-height: 100%; - overflow: hidden -} - -.modal-dialog-scrollable .modal-body { - overflow-y: auto -} - -.modal-dialog-centered { - display: flex; - display: -webkit-flex; - align-items: center; - -webkit-align-items: center; - min-height: calc(100% - var(--bs-modal-margin) * 2) -} - -.modal-content { - position: relative; - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - width: 100%; - color: var(--bs-modal-color); - pointer-events: auto; - background-color: var(--bs-modal-bg); - background-clip: padding-box; - border: var(--bs-modal-border-width) solid var(--bs-modal-border-color); - border-radius: var(--bs-modal-border-radius); - outline: 0 -} - -.modal-backdrop { - --bs-backdrop-zindex: 1050; - --bs-backdrop-bg: #000; - --bs-backdrop-opacity: .5; - position: fixed; - top: 0; - left: 0; - z-index: var(--bs-backdrop-zindex); - width: 100vw; - height: 100vh; - background-color: var(--bs-backdrop-bg) -} - -.modal-backdrop.fade { - opacity: 0 -} - -.modal-backdrop.show, -.modal-backdrop.in { - opacity: var(--bs-backdrop-opacity) -} - -.modal-header { - display: flex; - display: -webkit-flex; - flex-shrink: 0; - -webkit-flex-shrink: 0; - align-items: center; - -webkit-align-items: center; - justify-content: 
space-between; - -webkit-justify-content: space-between; - padding: var(--bs-modal-header-padding); - border-bottom: var(--bs-modal-header-border-width) solid var(--bs-modal-header-border-color); - border-top-left-radius: var(--bs-modal-inner-border-radius); - border-top-right-radius: var(--bs-modal-inner-border-radius) -} - -.modal-header .btn-close { - padding: calc(var(--bs-modal-header-padding-y) * .5) calc(var(--bs-modal-header-padding-x) * .5); - margin: calc(-.5 * var(--bs-modal-header-padding-y)) calc(-.5 * var(--bs-modal-header-padding-x)) calc(-.5 * var(--bs-modal-header-padding-y)) auto -} - -.modal-title { - margin-bottom: 0; - line-height: var(--bs-modal-title-line-height) -} - -.modal-body { - position: relative; - flex: 1 1 auto; - -webkit-flex: 1 1 auto; - padding: var(--bs-modal-padding) -} - -.modal-footer { - display: flex; - display: -webkit-flex; - flex-shrink: 0; - -webkit-flex-shrink: 0; - flex-wrap: wrap; - -webkit-flex-wrap: wrap; - align-items: center; - -webkit-align-items: center; - justify-content: flex-end; - -webkit-justify-content: flex-end; - padding: calc(var(--bs-modal-padding) - var(--bs-modal-footer-gap) * .5); - background-color: var(--bs-modal-footer-bg); - border-top: var(--bs-modal-footer-border-width) solid var(--bs-modal-footer-border-color); - border-bottom-right-radius: var(--bs-modal-inner-border-radius); - border-bottom-left-radius: var(--bs-modal-inner-border-radius) -} - -.modal-footer>* { - margin: calc(var(--bs-modal-footer-gap) * .5) -} - -@media (min-width: 576px) { - .modal { - --bs-modal-margin: 1.75rem; - --bs-modal-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15) - } - - .modal-dialog { - max-width: var(--bs-modal-width); - margin-right: auto; - margin-left: auto - } - - .modal-sm { - --bs-modal-width: 300px - } -} - -@media (min-width: 992px) { - - .modal-lg, - .modal-xl { - --bs-modal-width: 800px - } -} - -@media (min-width: 1200px) { - .modal-xl { - --bs-modal-width: 1140px - } -} - -.modal-fullscreen { - width: 100vw; - max-width: none; - height: 100%; - margin: 0 -} - -.modal-fullscreen .modal-content { - height: 100%; - border: 0; - border-radius: 0 -} - -.modal-fullscreen .modal-header, -.modal-fullscreen .modal-footer { - border-radius: 0 -} - -.modal-fullscreen .modal-body { - overflow-y: auto -} - -@media (max-width: 575.98px) { - .modal-fullscreen-sm-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0 - } - - .modal-fullscreen-sm-down .modal-content { - height: 100%; - border: 0; - border-radius: 0 - } - - .modal-fullscreen-sm-down .modal-header, - .modal-fullscreen-sm-down .modal-footer { - border-radius: 0 - } - - .modal-fullscreen-sm-down .modal-body { - overflow-y: auto - } -} - -@media (max-width: 767.98px) { - .modal-fullscreen-md-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0 - } - - .modal-fullscreen-md-down .modal-content { - height: 100%; - border: 0; - border-radius: 0 - } - - .modal-fullscreen-md-down .modal-header, - .modal-fullscreen-md-down .modal-footer { - border-radius: 0 - } - - .modal-fullscreen-md-down .modal-body { - overflow-y: auto - } -} - -@media (max-width: 991.98px) { - .modal-fullscreen-lg-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0 - } - - .modal-fullscreen-lg-down .modal-content { - height: 100%; - border: 0; - border-radius: 0 - } - - .modal-fullscreen-lg-down .modal-header, - .modal-fullscreen-lg-down .modal-footer { - border-radius: 0 - } - - .modal-fullscreen-lg-down .modal-body { - overflow-y: auto - } -} - -@media 
(max-width: 1199.98px) { - .modal-fullscreen-xl-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0 - } - - .modal-fullscreen-xl-down .modal-content { - height: 100%; - border: 0; - border-radius: 0 - } - - .modal-fullscreen-xl-down .modal-header, - .modal-fullscreen-xl-down .modal-footer { - border-radius: 0 - } - - .modal-fullscreen-xl-down .modal-body { - overflow-y: auto - } -} - -@media (max-width: 1399.98px) { - .modal-fullscreen-xxl-down { - width: 100vw; - max-width: none; - height: 100%; - margin: 0 - } - - .modal-fullscreen-xxl-down .modal-content { - height: 100%; - border: 0; - border-radius: 0 - } - - .modal-fullscreen-xxl-down .modal-header, - .modal-fullscreen-xxl-down .modal-footer { - border-radius: 0 - } - - .modal-fullscreen-xxl-down .modal-body { - overflow-y: auto - } -} - -.tooltip { - --bs-tooltip-zindex: 1080; - --bs-tooltip-max-width: 200px; - --bs-tooltip-padding-x: .5rem; - --bs-tooltip-padding-y: .25rem; - --bs-tooltip-margin: ; - --bs-tooltip-font-size: .875rem; - --bs-tooltip-color: #fff; - --bs-tooltip-bg: #000; - --bs-tooltip-border-radius: .375rem; - --bs-tooltip-opacity: .9; - --bs-tooltip-arrow-width: .8rem; - --bs-tooltip-arrow-height: .4rem; - z-index: var(--bs-tooltip-zindex); - display: block; - padding: var(--bs-tooltip-arrow-height); - margin: var(--bs-tooltip-margin); - font-family: var(--bs-font-sans-serif); - font-style: normal; - font-weight: 400; - line-height: 1.5; - text-align: left; - text-align: start; - text-decoration: none; - text-shadow: none; - text-transform: none; - letter-spacing: normal; - word-break: normal; - white-space: normal; - word-spacing: normal; - line-break: auto; - font-size: var(--bs-tooltip-font-size); - word-wrap: break-word; - opacity: 0 -} - -.tooltip.show, -.tooltip.in { - opacity: var(--bs-tooltip-opacity) -} - -.tooltip .tooltip-arrow { - display: block; - width: var(--bs-tooltip-arrow-width); - height: var(--bs-tooltip-arrow-height) -} - -.tooltip .tooltip-arrow::before { - position: absolute; - content: ""; - border-color: transparent; - border-style: solid -} - -.bs-tooltip-top .tooltip-arrow, -.bs-tooltip-auto[data-popper-placement^="top"] .tooltip-arrow { - bottom: 0 -} - -.bs-tooltip-top .tooltip-arrow::before, -.bs-tooltip-auto[data-popper-placement^="top"] .tooltip-arrow::before { - top: -1px; - border-width: var(--bs-tooltip-arrow-height) calc(var(--bs-tooltip-arrow-width) * .5) 0; - border-top-color: var(--bs-tooltip-bg) -} - -.bs-tooltip-end .tooltip-arrow, -.bs-tooltip-auto[data-popper-placement^="right"] .tooltip-arrow { - left: 0; - width: var(--bs-tooltip-arrow-height); - height: var(--bs-tooltip-arrow-width) -} - -.bs-tooltip-end .tooltip-arrow::before, -.bs-tooltip-auto[data-popper-placement^="right"] .tooltip-arrow::before { - right: -1px; - border-width: calc(var(--bs-tooltip-arrow-width) * .5) var(--bs-tooltip-arrow-height) calc(var(--bs-tooltip-arrow-width) * .5) 0; - border-right-color: var(--bs-tooltip-bg) -} - -.bs-tooltip-bottom .tooltip-arrow, -.bs-tooltip-auto[data-popper-placement^="bottom"] .tooltip-arrow { - top: 0 -} - -.bs-tooltip-bottom .tooltip-arrow::before, -.bs-tooltip-auto[data-popper-placement^="bottom"] .tooltip-arrow::before { - bottom: -1px; - border-width: 0 calc(var(--bs-tooltip-arrow-width) * .5) var(--bs-tooltip-arrow-height); - border-bottom-color: var(--bs-tooltip-bg) -} - -.bs-tooltip-start .tooltip-arrow, -.bs-tooltip-auto[data-popper-placement^="left"] .tooltip-arrow { - right: 0; - width: var(--bs-tooltip-arrow-height); - height: 
var(--bs-tooltip-arrow-width) -} - -.bs-tooltip-start .tooltip-arrow::before, -.bs-tooltip-auto[data-popper-placement^="left"] .tooltip-arrow::before { - left: -1px; - border-width: calc(var(--bs-tooltip-arrow-width) * .5) 0 calc(var(--bs-tooltip-arrow-width) * .5) var(--bs-tooltip-arrow-height); - border-left-color: var(--bs-tooltip-bg) -} - -.tooltip-inner { - max-width: var(--bs-tooltip-max-width); - padding: var(--bs-tooltip-padding-y) var(--bs-tooltip-padding-x); - color: var(--bs-tooltip-color); - text-align: center; - background-color: var(--bs-tooltip-bg); - border-radius: var(--bs-tooltip-border-radius) -} - -.popover { - --bs-popover-zindex: 1070; - --bs-popover-max-width: 276px; - --bs-popover-font-size: .875rem; - --bs-popover-bg: #fff; - --bs-popover-border-width: 1px; - --bs-popover-border-color: var(--bs-border-color-translucent); - --bs-popover-border-radius: .5rem; - --bs-popover-inner-border-radius: calc(.5rem - 1px); - --bs-popover-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15); - --bs-popover-header-padding-x: 1rem; - --bs-popover-header-padding-y: .5rem; - --bs-popover-header-font-size: 1rem; - --bs-popover-header-color: ; - --bs-popover-header-bg: #f8f5f0; - --bs-popover-body-padding-x: 1rem; - --bs-popover-body-padding-y: 1rem; - --bs-popover-body-color: #3e3f3a; - --bs-popover-arrow-width: 1rem; - --bs-popover-arrow-height: .5rem; - --bs-popover-arrow-border: var(--bs-popover-border-color); - z-index: var(--bs-popover-zindex); - display: block; - max-width: var(--bs-popover-max-width); - font-family: var(--bs-font-sans-serif); - font-style: normal; - font-weight: 400; - line-height: 1.5; - text-align: left; - text-align: start; - text-decoration: none; - text-shadow: none; - text-transform: none; - letter-spacing: normal; - word-break: normal; - white-space: normal; - word-spacing: normal; - line-break: auto; - font-size: var(--bs-popover-font-size); - word-wrap: break-word; - background-color: var(--bs-popover-bg); - background-clip: padding-box; - border: var(--bs-popover-border-width) solid var(--bs-popover-border-color); - border-radius: var(--bs-popover-border-radius) -} - -.popover .popover-arrow { - display: block; - width: var(--bs-popover-arrow-width); - height: var(--bs-popover-arrow-height) -} - -.popover .popover-arrow::before, -.popover .popover-arrow::after { - position: absolute; - display: block; - content: ""; - border-color: transparent; - border-style: solid; - border-width: 0 -} - -.bs-popover-top>.popover-arrow, -.bs-popover-auto[data-popper-placement^="top"]>.popover-arrow { - bottom: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)) -} - -.bs-popover-top>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="top"]>.popover-arrow::before, -.bs-popover-top>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="top"]>.popover-arrow::after { - border-width: var(--bs-popover-arrow-height) calc(var(--bs-popover-arrow-width) * .5) 0 -} - -.bs-popover-top>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="top"]>.popover-arrow::before { - bottom: 0; - border-top-color: var(--bs-popover-arrow-border) -} - -.bs-popover-top>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="top"]>.popover-arrow::after { - bottom: var(--bs-popover-border-width); - border-top-color: var(--bs-popover-bg) -} - -.bs-popover-end>.popover-arrow, -.bs-popover-auto[data-popper-placement^="right"]>.popover-arrow { - left: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)); - width: 
var(--bs-popover-arrow-height); - height: var(--bs-popover-arrow-width) -} - -.bs-popover-end>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="right"]>.popover-arrow::before, -.bs-popover-end>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="right"]>.popover-arrow::after { - border-width: calc(var(--bs-popover-arrow-width) * .5) var(--bs-popover-arrow-height) calc(var(--bs-popover-arrow-width) * .5) 0 -} - -.bs-popover-end>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="right"]>.popover-arrow::before { - left: 0; - border-right-color: var(--bs-popover-arrow-border) -} - -.bs-popover-end>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="right"]>.popover-arrow::after { - left: var(--bs-popover-border-width); - border-right-color: var(--bs-popover-bg) -} - -.bs-popover-bottom>.popover-arrow, -.bs-popover-auto[data-popper-placement^="bottom"]>.popover-arrow { - top: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)) -} - -.bs-popover-bottom>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="bottom"]>.popover-arrow::before, -.bs-popover-bottom>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="bottom"]>.popover-arrow::after { - border-width: 0 calc(var(--bs-popover-arrow-width) * .5) var(--bs-popover-arrow-height) -} - -.bs-popover-bottom>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="bottom"]>.popover-arrow::before { - top: 0; - border-bottom-color: var(--bs-popover-arrow-border) -} - -.bs-popover-bottom>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="bottom"]>.popover-arrow::after { - top: var(--bs-popover-border-width); - border-bottom-color: var(--bs-popover-bg) -} - -.bs-popover-bottom .popover-header::before, -.bs-popover-auto[data-popper-placement^="bottom"] .popover-header::before { - position: absolute; - top: 0; - left: 50%; - display: block; - width: var(--bs-popover-arrow-width); - margin-left: calc(-.5 * var(--bs-popover-arrow-width)); - content: ""; - border-bottom: var(--bs-popover-border-width) solid var(--bs-popover-header-bg) -} - -.bs-popover-start>.popover-arrow, -.bs-popover-auto[data-popper-placement^="left"]>.popover-arrow { - right: calc(-1 * (var(--bs-popover-arrow-height)) - var(--bs-popover-border-width)); - width: var(--bs-popover-arrow-height); - height: var(--bs-popover-arrow-width) -} - -.bs-popover-start>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="left"]>.popover-arrow::before, -.bs-popover-start>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="left"]>.popover-arrow::after { - border-width: calc(var(--bs-popover-arrow-width) * .5) 0 calc(var(--bs-popover-arrow-width) * .5) var(--bs-popover-arrow-height) -} - -.bs-popover-start>.popover-arrow::before, -.bs-popover-auto[data-popper-placement^="left"]>.popover-arrow::before { - right: 0; - border-left-color: var(--bs-popover-arrow-border) -} - -.bs-popover-start>.popover-arrow::after, -.bs-popover-auto[data-popper-placement^="left"]>.popover-arrow::after { - right: var(--bs-popover-border-width); - border-left-color: var(--bs-popover-bg) -} - -.popover-header { - padding: var(--bs-popover-header-padding-y) var(--bs-popover-header-padding-x); - margin-bottom: 0; - font-size: var(--bs-popover-header-font-size); - color: var(--bs-popover-header-color); - background-color: var(--bs-popover-header-bg); - border-bottom: var(--bs-popover-border-width) solid var(--bs-popover-border-color); - border-top-left-radius: 
var(--bs-popover-inner-border-radius); - border-top-right-radius: var(--bs-popover-inner-border-radius) -} - -.popover-header:empty { - display: none -} - -.popover-body { - padding: var(--bs-popover-body-padding-y) var(--bs-popover-body-padding-x); - color: var(--bs-popover-body-color) -} - -.carousel { - position: relative -} - -.carousel.pointer-event { - touch-action: pan-y; - -webkit-touch-action: pan-y; - -moz-touch-action: pan-y; - -ms-touch-action: pan-y; - -o-touch-action: pan-y -} - -.carousel-inner { - position: relative; - width: 100%; - overflow: hidden -} - -.carousel-inner::after { - display: block; - clear: both; - content: "" -} - -.carousel-item { - position: relative; - display: none; - float: left; - width: 100%; - margin-right: -100%; - backface-visibility: hidden; - -webkit-backface-visibility: hidden; - -moz-backface-visibility: hidden; - -ms-backface-visibility: hidden; - -o-backface-visibility: hidden -} - -.carousel-item.active, -.carousel-item-next, -.carousel-item-prev { - display: block -} - -.carousel-item-next:not(.carousel-item-start), -.active.carousel-item-end { - transform: translateX(100%) -} - -.carousel-item-prev:not(.carousel-item-end), -.active.carousel-item-start { - transform: translateX(-100%) -} - -.carousel-fade .carousel-item { - opacity: 0; - transition-property: opacity; - transform: none -} - -.carousel-fade .carousel-item.active, -.carousel-fade .carousel-item-next.carousel-item-start, -.carousel-fade .carousel-item-prev.carousel-item-end { - z-index: 1; - opacity: 1 -} - -.carousel-fade .active.carousel-item-start, -.carousel-fade .active.carousel-item-end { - z-index: 0; - opacity: 0 -} - -.carousel-control-prev, -.carousel-control-next { - position: absolute; - top: 0; - bottom: 0; - z-index: 1; - display: flex; - display: -webkit-flex; - align-items: center; - -webkit-align-items: center; - justify-content: center; - -webkit-justify-content: center; - width: 15%; - padding: 0; - color: #fff; - text-align: center; - background: none; - border: 0; - opacity: .5 -} - -.carousel-control-prev:hover, -.carousel-control-prev:focus, -.carousel-control-next:hover, -.carousel-control-next:focus { - color: #fff; - text-decoration: none; - outline: 0; - opacity: .9 -} - -.carousel-control-prev { - left: 0; - background-image: linear-gradient(90deg, rgba(0, 0, 0, 0.25), rgba(0, 0, 0, 0.001)) -} - -.carousel-control-next { - right: 0; - background-image: linear-gradient(270deg, rgba(0, 0, 0, 0.25), rgba(0, 0, 0, 0.001)) -} - -.carousel-control-prev-icon, -.carousel-control-next-icon { - display: inline-block; - width: 2rem; - height: 2rem; - background-repeat: no-repeat; - background-position: 50%; - background-size: 100% 100% -} - -.carousel-control-prev-icon { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e") -} - -.carousel-control-next-icon { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e") -} - -.carousel-indicators { - position: absolute; - right: 0; - bottom: 0; - left: 0; - z-index: 2; - display: flex; - display: -webkit-flex; - justify-content: center; - -webkit-justify-content: center; - padding: 0; - 
margin-right: 15%; - margin-bottom: 1rem; - margin-left: 15%; - list-style: none -} - -.carousel-indicators [data-bs-target] { - box-sizing: content-box; - flex: 0 1 auto; - -webkit-flex: 0 1 auto; - width: 30px; - height: 3px; - padding: 0; - margin-right: 3px; - margin-left: 3px; - text-indent: -999px; - cursor: pointer; - background-color: #fff; - background-clip: padding-box; - border: 0; - border-top: 10px solid transparent; - border-bottom: 10px solid transparent; - opacity: .5 -} - -.carousel-indicators .active { - opacity: 1 -} - -.carousel-caption { - position: absolute; - right: 15%; - bottom: 1.25rem; - left: 15%; - padding-top: 1.25rem; - padding-bottom: 1.25rem; - color: #fff; - text-align: center -} - -.carousel-dark .carousel-control-prev-icon, -.carousel-dark .carousel-control-next-icon { - filter: invert(1) grayscale(100) -} - -.carousel-dark .carousel-indicators [data-bs-target] { - background-color: #000 -} - -.carousel-dark .carousel-caption { - color: #000 -} - -.spinner-grow, -.spinner-border { - display: inline-block; - width: var(--bs-spinner-width); - height: var(--bs-spinner-height); - vertical-align: var(--bs-spinner-vertical-align); - border-radius: 50%; - animation: var(--bs-spinner-animation-speed) linear infinite var(--bs-spinner-animation-name) -} - -@keyframes spinner-border { - to { - transform: rotate(360deg) - /* rtl:ignore */ - } -} - -.spinner-border { - --bs-spinner-width: 2rem; - --bs-spinner-height: 2rem; - --bs-spinner-vertical-align: -.125em; - --bs-spinner-border-width: .25em; - --bs-spinner-animation-speed: .75s; - --bs-spinner-animation-name: spinner-border; - border: var(--bs-spinner-border-width) solid currentcolor; - border-right-color: transparent -} - -.spinner-border-sm { - --bs-spinner-width: 1rem; - --bs-spinner-height: 1rem; - --bs-spinner-border-width: .2em -} - -@keyframes spinner-grow { - 0% { - transform: scale(0) - } - - 50% { - opacity: 1; - transform: none - } -} - -.spinner-grow { - --bs-spinner-width: 2rem; - --bs-spinner-height: 2rem; - --bs-spinner-vertical-align: -.125em; - --bs-spinner-animation-speed: .75s; - --bs-spinner-animation-name: spinner-grow; - background-color: currentcolor; - opacity: 0 -} - -.spinner-grow-sm { - --bs-spinner-width: 1rem; - --bs-spinner-height: 1rem -} - -@media (prefers-reduced-motion: reduce) { - - .spinner-border, - .spinner-grow { - --bs-spinner-animation-speed: 1.5s - } -} - -.offcanvas, -.offcanvas-xxl, -.offcanvas-xl, -.offcanvas-lg, -.offcanvas-md, -.offcanvas-sm { - --bs-offcanvas-zindex: 1045; - --bs-offcanvas-width: 400px; - --bs-offcanvas-height: 30vh; - --bs-offcanvas-padding-x: 1rem; - --bs-offcanvas-padding-y: 1rem; - --bs-offcanvas-color: ; - --bs-offcanvas-bg: #fff; - --bs-offcanvas-border-width: 1px; - --bs-offcanvas-border-color: #dfd7ca; - --bs-offcanvas-box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075) -} - -@media (max-width: 575.98px) { - .offcanvas-sm { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0 - } - - .offcanvas-sm.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%) - } - - .offcanvas-sm.offcanvas-end { - top: 0; - right: 0; - width: 
var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%) - } - - .offcanvas-sm.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%) - } - - .offcanvas-sm.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%) - } - - .offcanvas-sm.showing, - .offcanvas-sm.show:not(.hiding), - .offcanvas-sm.in:not(.hiding) { - transform: none - } - - .offcanvas-sm.showing, - .offcanvas-sm.hiding, - .offcanvas-sm.show, - .offcanvas-sm.in { - visibility: visible - } -} - -@media (min-width: 576px) { - .offcanvas-sm { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important - } - - .offcanvas-sm .offcanvas-header { - display: none - } - - .offcanvas-sm .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important - } -} - -@media (max-width: 767.98px) { - .offcanvas-md { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0 - } - - .offcanvas-md.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%) - } - - .offcanvas-md.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%) - } - - .offcanvas-md.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%) - } - - .offcanvas-md.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%) - } - - .offcanvas-md.showing, - .offcanvas-md.show:not(.hiding), - .offcanvas-md.in:not(.hiding) { - transform: none - } - - .offcanvas-md.showing, - .offcanvas-md.hiding, - .offcanvas-md.show, - .offcanvas-md.in { - visibility: visible - } -} - -@media (min-width: 768px) { - .offcanvas-md { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important - } - - .offcanvas-md .offcanvas-header { - display: none - } - - .offcanvas-md .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important - } -} - -@media (max-width: 991.98px) { - .offcanvas-lg { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - max-width: 100%; - color: 
var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0 - } - - .offcanvas-lg.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%) - } - - .offcanvas-lg.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%) - } - - .offcanvas-lg.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%) - } - - .offcanvas-lg.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%) - } - - .offcanvas-lg.showing, - .offcanvas-lg.show:not(.hiding), - .offcanvas-lg.in:not(.hiding) { - transform: none - } - - .offcanvas-lg.showing, - .offcanvas-lg.hiding, - .offcanvas-lg.show, - .offcanvas-lg.in { - visibility: visible - } -} - -@media (min-width: 992px) { - .offcanvas-lg { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important - } - - .offcanvas-lg .offcanvas-header { - display: none - } - - .offcanvas-lg .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important - } -} - -@media (max-width: 1199.98px) { - .offcanvas-xl { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0 - } - - .offcanvas-xl.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%) - } - - .offcanvas-xl.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%) - } - - .offcanvas-xl.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%) - } - - .offcanvas-xl.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%) - } - - .offcanvas-xl.showing, - .offcanvas-xl.show:not(.hiding), - .offcanvas-xl.in:not(.hiding) { - transform: none - } - - .offcanvas-xl.showing, - .offcanvas-xl.hiding, - .offcanvas-xl.show, - .offcanvas-xl.in { - visibility: visible - } -} - -@media (min-width: 1200px) { - .offcanvas-xl { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important - } - - .offcanvas-xl .offcanvas-header { - display: none - } - - .offcanvas-xl .offcanvas-body { - display: 
flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important - } -} - -@media (max-width: 1399.98px) { - .offcanvas-xxl { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0 - } - - .offcanvas-xxl.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%) - } - - .offcanvas-xxl.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%) - } - - .offcanvas-xxl.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%) - } - - .offcanvas-xxl.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%) - } - - .offcanvas-xxl.showing, - .offcanvas-xxl.show:not(.hiding), - .offcanvas-xxl.in:not(.hiding) { - transform: none - } - - .offcanvas-xxl.showing, - .offcanvas-xxl.hiding, - .offcanvas-xxl.show, - .offcanvas-xxl.in { - visibility: visible - } -} - -@media (min-width: 1400px) { - .offcanvas-xxl { - --bs-offcanvas-height: auto; - --bs-offcanvas-border-width: 0; - background-color: transparent !important - } - - .offcanvas-xxl .offcanvas-header { - display: none - } - - .offcanvas-xxl .offcanvas-body { - display: flex; - display: -webkit-flex; - flex-grow: 0; - -webkit-flex-grow: 0; - padding: 0; - overflow-y: visible; - background-color: transparent !important - } -} - -.offcanvas { - position: fixed; - bottom: 0; - z-index: var(--bs-offcanvas-zindex); - display: flex; - display: -webkit-flex; - flex-direction: column; - -webkit-flex-direction: column; - max-width: 100%; - color: var(--bs-offcanvas-color); - visibility: hidden; - background-color: var(--bs-offcanvas-bg); - background-clip: padding-box; - outline: 0 -} - -.offcanvas.offcanvas-start { - top: 0; - left: 0; - width: var(--bs-offcanvas-width); - border-right: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(-100%) -} - -.offcanvas.offcanvas-end { - top: 0; - right: 0; - width: var(--bs-offcanvas-width); - border-left: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateX(100%) -} - -.offcanvas.offcanvas-top { - top: 0; - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-bottom: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(-100%) -} - -.offcanvas.offcanvas-bottom { - right: 0; - left: 0; - height: var(--bs-offcanvas-height); - max-height: 100%; - border-top: var(--bs-offcanvas-border-width) solid var(--bs-offcanvas-border-color); - transform: translateY(100%) -} - -.offcanvas.showing, -.offcanvas.show:not(.hiding), -.offcanvas.in:not(.hiding) { - transform: none -} - -.offcanvas.showing, 
-.offcanvas.hiding, -.offcanvas.show, -.offcanvas.in { - visibility: visible -} - -.offcanvas-backdrop { - position: fixed; - top: 0; - left: 0; - z-index: 1040; - width: 100vw; - height: 100vh; - background-color: #000 -} - -.offcanvas-backdrop.fade { - opacity: 0 -} - -.offcanvas-backdrop.show, -.offcanvas-backdrop.in { - opacity: .5 -} - -.offcanvas-header { - display: flex; - display: -webkit-flex; - align-items: center; - -webkit-align-items: center; - justify-content: space-between; - -webkit-justify-content: space-between; - padding: var(--bs-offcanvas-padding-y) var(--bs-offcanvas-padding-x) -} - -.offcanvas-header .btn-close { - padding: calc(var(--bs-offcanvas-padding-y) * .5) calc(var(--bs-offcanvas-padding-x) * .5); - margin-top: calc(-.5 * var(--bs-offcanvas-padding-y)); - margin-right: calc(-.5 * var(--bs-offcanvas-padding-x)); - margin-bottom: calc(-.5 * var(--bs-offcanvas-padding-y)) -} - -.offcanvas-title { - margin-bottom: 0; - line-height: 1.5 -} - -.offcanvas-body { - flex-grow: 1; - -webkit-flex-grow: 1; - padding: var(--bs-offcanvas-padding-y) var(--bs-offcanvas-padding-x); - overflow-y: auto -} - -.placeholder { - display: inline-block; - min-height: 1em; - vertical-align: middle; - cursor: wait; - background-color: currentcolor; - opacity: .5 -} - -.placeholder.btn::before { - display: inline-block; - content: "" -} - -.placeholder-xs { - min-height: .6em -} - -.placeholder-sm { - min-height: .8em -} - -.placeholder-lg { - min-height: 1.2em -} - -.placeholder-glow .placeholder { - animation: placeholder-glow 2s ease-in-out infinite -} - -@keyframes placeholder-glow { - 50% { - opacity: .2 - } -} - -.placeholder-wave { - mask-image: linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%); - -webkit-mask-image: linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%); - mask-size: 200% 100%; - -webkit-mask-size: 200% 100%; - animation: placeholder-wave 2s linear infinite -} - -@keyframes placeholder-wave { - 100% { - mask-position: -200% 0%; - -webkit-mask-position: -200% 0% - } -} - -.clearfix::after { - display: block; - clear: both; - content: "" -} - -.text-bg-default { - color: #fff !important; - background-color: RGBA(142, 140, 132, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-primary { - color: #fff !important; - background-color: RGBA(50, 93, 136, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-secondary { - color: #fff !important; - background-color: RGBA(142, 140, 132, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-success { - color: #fff !important; - background-color: RGBA(147, 197, 75, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-info { - color: #fff !important; - background-color: RGBA(41, 171, 224, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-warning { - color: #fff !important; - background-color: RGBA(244, 124, 60, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-danger { - color: #fff !important; - background-color: RGBA(217, 83, 79, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-light { - color: #000 !important; - background-color: RGBA(248, 245, 240, var(--bs-bg-opacity, 1)) !important -} - -.text-bg-dark { - color: #fff !important; - background-color: RGBA(62, 63, 58, var(--bs-bg-opacity, 1)) !important -} - -.link-default { - color: #8e8c84 !important -} - -.link-default:hover, -.link-default:focus { - color: #72706a !important -} - -.link-primary { - color: #325d88 !important -} - -.link-primary:hover, -.link-primary:focus { - color: #284a6d !important -} - -.link-secondary { - color: #8e8c84 
!important -} - -.link-secondary:hover, -.link-secondary:focus { - color: #72706a !important -} - -.link-success { - color: #93c54b !important -} - -.link-success:hover, -.link-success:focus { - color: #769e3c !important -} - -.link-info { - color: #29abe0 !important -} - -.link-info:hover, -.link-info:focus { - color: #2189b3 !important -} - -.link-warning { - color: #f47c3c !important -} - -.link-warning:hover, -.link-warning:focus { - color: #c36330 !important -} - -.link-danger { - color: #d9534f !important -} - -.link-danger:hover, -.link-danger:focus { - color: #ae423f !important -} - -.link-light { - color: #f8f5f0 !important -} - -.link-light:hover, -.link-light:focus { - color: #f9f7f3 !important -} - -.link-dark { - color: #3e3f3a !important -} - -.link-dark:hover, -.link-dark:focus { - color: #32322e !important -} - -.ratio { - position: relative; - width: 100% -} - -.ratio::before { - display: block; - padding-top: var(--bs-aspect-ratio); - content: "" -} - -.ratio>* { - position: absolute; - top: 0; - left: 0; - width: 100%; - height: 100% -} - -.ratio-1x1 { - --bs-aspect-ratio: 100% -} - -.ratio-4x3 { - --bs-aspect-ratio: calc(3 / 4 * 100%) -} - -.ratio-16x9 { - --bs-aspect-ratio: calc(9 / 16 * 100%) -} - -.ratio-21x9 { - --bs-aspect-ratio: calc(9 / 21 * 100%) -} - -.fixed-top, -.navbar-fixed-top { - position: fixed; - top: 0; - right: 0; - left: 0; - z-index: 1030 -} - -.fixed-bottom, -.navbar-fixed-bottom { - position: fixed; - right: 0; - bottom: 0; - left: 0; - z-index: 1030 -} - -.sticky-top, -.navbar-sticky-top { - position: sticky; - top: 0; - z-index: 1020 -} - -.sticky-bottom { - position: sticky; - bottom: 0; - z-index: 1020 -} - -@media (min-width: 576px) { - .sticky-sm-top { - position: sticky; - top: 0; - z-index: 1020 - } - - .sticky-sm-bottom { - position: sticky; - bottom: 0; - z-index: 1020 - } -} - -@media (min-width: 768px) { - .sticky-md-top { - position: sticky; - top: 0; - z-index: 1020 - } - - .sticky-md-bottom { - position: sticky; - bottom: 0; - z-index: 1020 - } -} - -@media (min-width: 992px) { - .sticky-lg-top { - position: sticky; - top: 0; - z-index: 1020 - } - - .sticky-lg-bottom { - position: sticky; - bottom: 0; - z-index: 1020 - } -} - -@media (min-width: 1200px) { - .sticky-xl-top { - position: sticky; - top: 0; - z-index: 1020 - } - - .sticky-xl-bottom { - position: sticky; - bottom: 0; - z-index: 1020 - } -} - -@media (min-width: 1400px) { - .sticky-xxl-top { - position: sticky; - top: 0; - z-index: 1020 - } - - .sticky-xxl-bottom { - position: sticky; - bottom: 0; - z-index: 1020 - } -} - -.hstack { - display: flex; - display: -webkit-flex; - flex-direction: row; - -webkit-flex-direction: row; - align-items: center; - -webkit-align-items: center; - align-self: stretch; - -webkit-align-self: stretch -} - -.vstack { - display: flex; - display: -webkit-flex; - flex: 1 1 auto; - -webkit-flex: 1 1 auto; - flex-direction: column; - -webkit-flex-direction: column; - align-self: stretch; - -webkit-align-self: stretch -} - -.visually-hidden, -.visually-hidden-focusable:not(:focus):not(:focus-within) { - position: absolute !important; - width: 1px !important; - height: 1px !important; - padding: 0 !important; - margin: -1px !important; - overflow: hidden !important; - clip: rect(0, 0, 0, 0) !important; - white-space: nowrap !important; - border: 0 !important -} - -.stretched-link::after { - position: absolute; - top: 0; - right: 0; - bottom: 0; - left: 0; - z-index: 1; - content: "" -} - -.text-truncate { - overflow: hidden; - text-overflow: 
ellipsis; - white-space: nowrap -} - -.vr { - display: inline-block; - align-self: stretch; - -webkit-align-self: stretch; - width: 1px; - min-height: 1em; - background-color: currentcolor; - opacity: .25 -} - -.align-baseline { - vertical-align: baseline !important -} - -.align-top { - vertical-align: top !important -} - -.align-middle { - vertical-align: middle !important -} - -.align-bottom { - vertical-align: bottom !important -} - -.align-text-bottom { - vertical-align: text-bottom !important -} - -.align-text-top { - vertical-align: text-top !important -} - -.float-start, -.float-left { - float: left !important -} - -.float-end, -.float-right { - float: right !important -} - -.float-none { - float: none !important -} - -.opacity-0 { - opacity: 0 !important -} - -.opacity-25 { - opacity: .25 !important -} - -.opacity-50 { - opacity: .5 !important -} - -.opacity-75 { - opacity: .75 !important -} - -.opacity-100 { - opacity: 1 !important -} - -.overflow-auto { - overflow: auto !important -} - -.overflow-hidden { - overflow: hidden !important -} - -.overflow-visible { - overflow: visible !important -} - -.overflow-scroll { - overflow: scroll !important -} - -.d-inline { - display: inline !important -} - -.d-inline-block { - display: inline-block !important -} - -.d-block { - display: block !important -} - -.d-grid { - display: grid !important -} - -.d-table { - display: table !important -} - -.d-table-row { - display: table-row !important -} - -.d-table-cell { - display: table-cell !important -} - -.d-flex { - display: flex !important -} - -.d-inline-flex { - display: inline-flex !important -} - -.d-none { - display: none !important -} - -.shadow { - box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15) !important -} - -.shadow-sm { - box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075) !important -} - -.shadow-lg { - box-shadow: 0 1rem 3rem rgba(0, 0, 0, 0.175) !important -} - -.shadow-none { - box-shadow: none !important -} - -.position-static { - position: static !important -} - -.position-relative { - position: relative !important -} - -.position-absolute { - position: absolute !important -} - -.position-fixed { - position: fixed !important -} - -.position-sticky { - position: sticky !important -} - -.top-0 { - top: 0 !important -} - -.top-50 { - top: 50% !important -} - -.top-100 { - top: 100% !important -} - -.bottom-0 { - bottom: 0 !important -} - -.bottom-50 { - bottom: 50% !important -} - -.bottom-100 { - bottom: 100% !important -} - -.start-0 { - left: 0 !important -} - -.start-50 { - left: 50% !important -} - -.start-100 { - left: 100% !important -} - -.end-0 { - right: 0 !important -} - -.end-50 { - right: 50% !important -} - -.end-100 { - right: 100% !important -} - -.translate-middle { - transform: translate(-50%, -50%) !important -} - -.translate-middle-x { - transform: translateX(-50%) !important -} - -.translate-middle-y { - transform: translateY(-50%) !important -} - -.border { - border: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important -} - -.border-0 { - border: 0 !important -} - -.border-top { - border-top: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important -} - -.border-top-0 { - border-top: 0 !important -} - -.border-end { - border-right: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important -} - -.border-end-0 { - border-right: 0 !important -} - -.border-bottom { - border-bottom: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important -} - -.border-bottom-0 { - 
border-bottom: 0 !important -} - -.border-start { - border-left: var(--bs-border-width) var(--bs-border-style) var(--bs-border-color) !important -} - -.border-start-0 { - border-left: 0 !important -} - -.border-default { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-default-rgb), var(--bs-border-opacity)) !important -} - -.border-primary { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-primary-rgb), var(--bs-border-opacity)) !important -} - -.border-secondary { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-secondary-rgb), var(--bs-border-opacity)) !important -} - -.border-success { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-success-rgb), var(--bs-border-opacity)) !important -} - -.border-info { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-info-rgb), var(--bs-border-opacity)) !important -} - -.border-warning { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-warning-rgb), var(--bs-border-opacity)) !important -} - -.border-danger { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-danger-rgb), var(--bs-border-opacity)) !important -} - -.border-light { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-light-rgb), var(--bs-border-opacity)) !important -} - -.border-dark { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-dark-rgb), var(--bs-border-opacity)) !important -} - -.border-white { - --bs-border-opacity: 1; - border-color: rgba(var(--bs-white-rgb), var(--bs-border-opacity)) !important -} - -.border-1 { - --bs-border-width: 1px -} - -.border-2 { - --bs-border-width: 2px -} - -.border-3 { - --bs-border-width: 3px -} - -.border-4 { - --bs-border-width: 4px -} - -.border-5 { - --bs-border-width: 5px -} - -.border-opacity-10 { - --bs-border-opacity: .1 -} - -.border-opacity-25 { - --bs-border-opacity: .25 -} - -.border-opacity-50 { - --bs-border-opacity: .5 -} - -.border-opacity-75 { - --bs-border-opacity: .75 -} - -.border-opacity-100 { - --bs-border-opacity: 1 -} - -.w-25 { - width: 25% !important -} - -.w-50 { - width: 50% !important -} - -.w-75 { - width: 75% !important -} - -.w-100 { - width: 100% !important -} - -.w-auto { - width: auto !important -} - -.mw-100 { - max-width: 100% !important -} - -.vw-100 { - width: 100vw !important -} - -.min-vw-100 { - min-width: 100vw !important -} - -.h-25 { - height: 25% !important -} - -.h-50 { - height: 50% !important -} - -.h-75 { - height: 75% !important -} - -.h-100 { - height: 100% !important -} - -.h-auto { - height: auto !important -} - -.mh-100 { - max-height: 100% !important -} - -.vh-100 { - height: 100vh !important -} - -.min-vh-100 { - min-height: 100vh !important -} - -.flex-fill { - flex: 1 1 auto !important -} - -.flex-row { - flex-direction: row !important -} - -.flex-column { - flex-direction: column !important -} - -.flex-row-reverse { - flex-direction: row-reverse !important -} - -.flex-column-reverse { - flex-direction: column-reverse !important -} - -.flex-grow-0 { - flex-grow: 0 !important -} - -.flex-grow-1 { - flex-grow: 1 !important -} - -.flex-shrink-0 { - flex-shrink: 0 !important -} - -.flex-shrink-1 { - flex-shrink: 1 !important -} - -.flex-wrap { - flex-wrap: wrap !important -} - -.flex-nowrap { - flex-wrap: nowrap !important -} - -.flex-wrap-reverse { - flex-wrap: wrap-reverse !important -} - -.justify-content-start { - justify-content: flex-start !important -} - -.justify-content-end { - justify-content: flex-end !important -} - -.justify-content-center { - justify-content: center !important -} - -.justify-content-between { - 
justify-content: space-between !important -} - -.justify-content-around { - justify-content: space-around !important -} - -.justify-content-evenly { - justify-content: space-evenly !important -} - -.align-items-start { - align-items: flex-start !important -} - -.align-items-end { - align-items: flex-end !important -} - -.align-items-center { - align-items: center !important -} - -.align-items-baseline { - align-items: baseline !important -} - -.align-items-stretch { - align-items: stretch !important -} - -.align-content-start { - align-content: flex-start !important -} - -.align-content-end { - align-content: flex-end !important -} - -.align-content-center { - align-content: center !important -} - -.align-content-between { - align-content: space-between !important -} - -.align-content-around { - align-content: space-around !important -} - -.align-content-stretch { - align-content: stretch !important -} - -.align-self-auto { - align-self: auto !important -} - -.align-self-start { - align-self: flex-start !important -} - -.align-self-end { - align-self: flex-end !important -} - -.align-self-center { - align-self: center !important -} - -.align-self-baseline { - align-self: baseline !important -} - -.align-self-stretch { - align-self: stretch !important -} - -.order-first { - order: -1 !important -} - -.order-0 { - order: 0 !important -} - -.order-1 { - order: 1 !important -} - -.order-2 { - order: 2 !important -} - -.order-3 { - order: 3 !important -} - -.order-4 { - order: 4 !important -} - -.order-5 { - order: 5 !important -} - -.order-last { - order: 6 !important -} - -.m-0 { - margin: 0 !important -} - -.m-1 { - margin: .25rem !important -} - -.m-2 { - margin: .5rem !important -} - -.m-3 { - margin: 1rem !important -} - -.m-4 { - margin: 1.5rem !important -} - -.m-5 { - margin: 3rem !important -} - -.m-auto { - margin: auto !important -} - -.mx-0 { - margin-right: 0 !important; - margin-left: 0 !important -} - -.mx-1 { - margin-right: .25rem !important; - margin-left: .25rem !important -} - -.mx-2 { - margin-right: .5rem !important; - margin-left: .5rem !important -} - -.mx-3 { - margin-right: 1rem !important; - margin-left: 1rem !important -} - -.mx-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important -} - -.mx-5 { - margin-right: 3rem !important; - margin-left: 3rem !important -} - -.mx-auto { - margin-right: auto !important; - margin-left: auto !important -} - -.my-0 { - margin-top: 0 !important; - margin-bottom: 0 !important -} - -.my-1 { - margin-top: .25rem !important; - margin-bottom: .25rem !important -} - -.my-2 { - margin-top: .5rem !important; - margin-bottom: .5rem !important -} - -.my-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important -} - -.my-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important -} - -.my-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important -} - -.my-auto { - margin-top: auto !important; - margin-bottom: auto !important -} - -.mt-0 { - margin-top: 0 !important -} - -.mt-1 { - margin-top: .25rem !important -} - -.mt-2 { - margin-top: .5rem !important -} - -.mt-3 { - margin-top: 1rem !important -} - -.mt-4 { - margin-top: 1.5rem !important -} - -.mt-5 { - margin-top: 3rem !important -} - -.mt-auto { - margin-top: auto !important -} - -.me-0 { - margin-right: 0 !important -} - -.me-1 { - margin-right: .25rem !important -} - -.me-2 { - margin-right: .5rem !important -} - -.me-3 { - margin-right: 1rem !important -} - -.me-4 { - margin-right: 1.5rem !important -} - -.me-5 { - margin-right: 3rem 
!important -} - -.me-auto { - margin-right: auto !important -} - -.mb-0 { - margin-bottom: 0 !important -} - -.mb-1 { - margin-bottom: .25rem !important -} - -.mb-2 { - margin-bottom: .5rem !important -} - -.mb-3 { - margin-bottom: 1rem !important -} - -.mb-4 { - margin-bottom: 1.5rem !important -} - -.mb-5 { - margin-bottom: 3rem !important -} - -.mb-auto { - margin-bottom: auto !important -} - -.ms-0 { - margin-left: 0 !important -} - -.ms-1 { - margin-left: .25rem !important -} - -.ms-2 { - margin-left: .5rem !important -} - -.ms-3 { - margin-left: 1rem !important -} - -.ms-4 { - margin-left: 1.5rem !important -} - -.ms-5 { - margin-left: 3rem !important -} - -.ms-auto { - margin-left: auto !important -} - -.p-0 { - padding: 0 !important -} - -.p-1 { - padding: .25rem !important -} - -.p-2 { - padding: .5rem !important -} - -.p-3 { - padding: 1rem !important -} - -.p-4 { - padding: 1.5rem !important -} - -.p-5 { - padding: 3rem !important -} - -.px-0 { - padding-right: 0 !important; - padding-left: 0 !important -} - -.px-1 { - padding-right: .25rem !important; - padding-left: .25rem !important -} - -.px-2 { - padding-right: .5rem !important; - padding-left: .5rem !important -} - -.px-3 { - padding-right: 1rem !important; - padding-left: 1rem !important -} - -.px-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important -} - -.px-5 { - padding-right: 3rem !important; - padding-left: 3rem !important -} - -.py-0 { - padding-top: 0 !important; - padding-bottom: 0 !important -} - -.py-1 { - padding-top: .25rem !important; - padding-bottom: .25rem !important -} - -.py-2 { - padding-top: .5rem !important; - padding-bottom: .5rem !important -} - -.py-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important -} - -.py-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important -} - -.py-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important -} - -.pt-0 { - padding-top: 0 !important -} - -.pt-1 { - padding-top: .25rem !important -} - -.pt-2 { - padding-top: .5rem !important -} - -.pt-3 { - padding-top: 1rem !important -} - -.pt-4 { - padding-top: 1.5rem !important -} - -.pt-5 { - padding-top: 3rem !important -} - -.pe-0 { - padding-right: 0 !important -} - -.pe-1 { - padding-right: .25rem !important -} - -.pe-2 { - padding-right: .5rem !important -} - -.pe-3 { - padding-right: 1rem !important -} - -.pe-4 { - padding-right: 1.5rem !important -} - -.pe-5 { - padding-right: 3rem !important -} - -.pb-0 { - padding-bottom: 0 !important -} - -.pb-1 { - padding-bottom: .25rem !important -} - -.pb-2 { - padding-bottom: .5rem !important -} - -.pb-3 { - padding-bottom: 1rem !important -} - -.pb-4 { - padding-bottom: 1.5rem !important -} - -.pb-5 { - padding-bottom: 3rem !important -} - -.ps-0 { - padding-left: 0 !important -} - -.ps-1 { - padding-left: .25rem !important -} - -.ps-2 { - padding-left: .5rem !important -} - -.ps-3 { - padding-left: 1rem !important -} - -.ps-4 { - padding-left: 1.5rem !important -} - -.ps-5 { - padding-left: 3rem !important -} - -.gap-0 { - gap: 0 !important -} - -.gap-1 { - gap: .25rem !important -} - -.gap-2 { - gap: .5rem !important -} - -.gap-3 { - gap: 1rem !important -} - -.gap-4 { - gap: 1.5rem !important -} - -.gap-5 { - gap: 3rem !important -} - -.font-monospace { - font-family: var(--bs-font-monospace) !important -} - -.fs-1 { - font-size: calc(1.375rem + 1.5vw) !important -} - -.fs-2 { - font-size: calc(1.325rem + .9vw) !important -} - -.fs-3 { - font-size: calc(1.3rem + .6vw) !important -} - -.fs-4 { - 
font-size: calc(1.275rem + .3vw) !important -} - -.fs-5 { - font-size: 1.25rem !important -} - -.fs-6 { - font-size: 1rem !important -} - -.fst-italic { - font-style: italic !important -} - -.fst-normal { - font-style: normal !important -} - -.fw-light { - font-weight: 300 !important -} - -.fw-lighter { - font-weight: lighter !important -} - -.fw-normal { - font-weight: 400 !important -} - -.fw-bold { - font-weight: 700 !important -} - -.fw-semibold { - font-weight: 600 !important -} - -.fw-bolder { - font-weight: bolder !important -} - -.lh-1 { - line-height: 1 !important -} - -.lh-sm { - line-height: 1.25 !important -} - -.lh-base { - line-height: 1.5 !important -} - -.lh-lg { - line-height: 2 !important -} - -.text-start { - text-align: left !important -} - -.text-end { - text-align: right !important -} - -.text-center { - text-align: center !important -} - -.text-decoration-none { - text-decoration: none !important -} - -.text-decoration-underline { - text-decoration: underline !important -} - -.text-decoration-line-through { - text-decoration: line-through !important -} - -.text-lowercase { - text-transform: lowercase !important -} - -.text-uppercase { - text-transform: uppercase !important -} - -.text-capitalize { - text-transform: capitalize !important -} - -.text-wrap { - white-space: normal !important -} - -.text-nowrap { - white-space: nowrap !important -} - -.text-break { - word-wrap: break-word !important; - word-break: break-word !important -} - -.text-default { - --bs-text-opacity: 1; - color: rgba(var(--bs-default-rgb), var(--bs-text-opacity)) !important -} - -.text-primary { - --bs-text-opacity: 1; - color: rgba(var(--bs-primary-rgb), var(--bs-text-opacity)) !important -} - -.text-secondary { - --bs-text-opacity: 1; - color: rgba(var(--bs-secondary-rgb), var(--bs-text-opacity)) !important -} - -.text-success { - --bs-text-opacity: 1; - color: rgba(var(--bs-success-rgb), var(--bs-text-opacity)) !important -} - -.text-info { - --bs-text-opacity: 1; - color: rgba(var(--bs-info-rgb), var(--bs-text-opacity)) !important -} - -.text-warning { - --bs-text-opacity: 1; - color: rgba(var(--bs-warning-rgb), var(--bs-text-opacity)) !important -} - -.text-danger { - --bs-text-opacity: 1; - color: rgba(var(--bs-danger-rgb), var(--bs-text-opacity)) !important -} - -.text-light { - --bs-text-opacity: 1; - color: rgba(var(--bs-light-rgb), var(--bs-text-opacity)) !important -} - -.text-dark { - --bs-text-opacity: 1; - color: rgba(var(--bs-dark-rgb), var(--bs-text-opacity)) !important -} - -.text-black { - --bs-text-opacity: 1; - color: rgba(var(--bs-black-rgb), var(--bs-text-opacity)) !important -} - -.text-white { - --bs-text-opacity: 1; - color: rgba(var(--bs-white-rgb), var(--bs-text-opacity)) !important -} - -.text-body { - --bs-text-opacity: 1; - color: rgba(var(--bs-body-color-rgb), var(--bs-text-opacity)) !important -} - -.text-muted, -.help-text, -.help-block { - --bs-text-opacity: 1; - color: #8e8c84 !important -} - -.text-black-50 { - --bs-text-opacity: 1; - color: rgba(0, 0, 0, 0.5) !important -} - -.text-white-50 { - --bs-text-opacity: 1; - color: rgba(255, 255, 255, 0.5) !important -} - -.text-reset { - --bs-text-opacity: 1; - color: inherit !important -} - -.text-opacity-25 { - --bs-text-opacity: .25 -} - -.text-opacity-50 { - --bs-text-opacity: .5 -} - -.text-opacity-75 { - --bs-text-opacity: .75 -} - -.text-opacity-100 { - --bs-text-opacity: 1 -} - -.bg-default { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-default-rgb), var(--bs-bg-opacity)) !important -} - 
-.bg-primary { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-primary-rgb), var(--bs-bg-opacity)) !important -} - -.bg-secondary { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-secondary-rgb), var(--bs-bg-opacity)) !important -} - -.bg-success { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-success-rgb), var(--bs-bg-opacity)) !important -} - -.bg-info { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-info-rgb), var(--bs-bg-opacity)) !important -} - -.bg-warning { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-warning-rgb), var(--bs-bg-opacity)) !important -} - -.bg-danger { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-danger-rgb), var(--bs-bg-opacity)) !important -} - -.bg-light { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-light-rgb), var(--bs-bg-opacity)) !important -} - -.bg-dark { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-dark-rgb), var(--bs-bg-opacity)) !important -} - -.bg-black { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-black-rgb), var(--bs-bg-opacity)) !important -} - -.bg-white { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-white-rgb), var(--bs-bg-opacity)) !important -} - -.bg-body { - --bs-bg-opacity: 1; - background-color: rgba(var(--bs-body-bg-rgb), var(--bs-bg-opacity)) !important -} - -.bg-transparent { - --bs-bg-opacity: 1; - background-color: rgba(0, 0, 0, 0) !important -} - -.bg-opacity-10 { - --bs-bg-opacity: .1 -} - -.bg-opacity-25 { - --bs-bg-opacity: .25 -} - -.bg-opacity-50 { - --bs-bg-opacity: .5 -} - -.bg-opacity-75 { - --bs-bg-opacity: .75 -} - -.bg-opacity-100 { - --bs-bg-opacity: 1 -} - -.bg-gradient { - background-image: var(--bs-gradient) !important -} - -.user-select-all { - user-select: all !important -} - -.user-select-auto { - user-select: auto !important -} - -.user-select-none { - user-select: none !important -} - -.pe-none { - pointer-events: none !important -} - -.pe-auto { - pointer-events: auto !important -} - -.rounded { - border-radius: var(--bs-border-radius) !important -} - -.rounded-0 { - border-radius: 0 !important -} - -.rounded-1 { - border-radius: var(--bs-border-radius-sm) !important -} - -.rounded-2 { - border-radius: var(--bs-border-radius) !important -} - -.rounded-3 { - border-radius: var(--bs-border-radius-lg) !important -} - -.rounded-4 { - border-radius: var(--bs-border-radius-xl) !important -} - -.rounded-5 { - border-radius: var(--bs-border-radius-2xl) !important -} - -.rounded-circle { - border-radius: 50% !important -} - -.rounded-pill { - border-radius: var(--bs-border-radius-pill) !important -} - -.rounded-top { - border-top-left-radius: var(--bs-border-radius) !important; - border-top-right-radius: var(--bs-border-radius) !important -} - -.rounded-end { - border-top-right-radius: var(--bs-border-radius) !important; - border-bottom-right-radius: var(--bs-border-radius) !important -} - -.rounded-bottom { - border-bottom-right-radius: var(--bs-border-radius) !important; - border-bottom-left-radius: var(--bs-border-radius) !important -} - -.rounded-start { - border-bottom-left-radius: var(--bs-border-radius) !important; - border-top-left-radius: var(--bs-border-radius) !important -} - -.visible { - visibility: visible !important -} - -.invisible { - visibility: hidden !important -} - -@media (min-width: 576px) { - .float-sm-start { - float: left !important - } - - .float-sm-end { - float: right !important - } - - .float-sm-none { - float: none !important - } - - .d-sm-inline { - display: inline !important - } - - 
.d-sm-inline-block { - display: inline-block !important - } - - .d-sm-block { - display: block !important - } - - .d-sm-grid { - display: grid !important - } - - .d-sm-table { - display: table !important - } - - .d-sm-table-row { - display: table-row !important - } - - .d-sm-table-cell { - display: table-cell !important - } - - .d-sm-flex { - display: flex !important - } - - .d-sm-inline-flex { - display: inline-flex !important - } - - .d-sm-none { - display: none !important - } - - .flex-sm-fill { - flex: 1 1 auto !important - } - - .flex-sm-row { - flex-direction: row !important - } - - .flex-sm-column { - flex-direction: column !important - } - - .flex-sm-row-reverse { - flex-direction: row-reverse !important - } - - .flex-sm-column-reverse { - flex-direction: column-reverse !important - } - - .flex-sm-grow-0 { - flex-grow: 0 !important - } - - .flex-sm-grow-1 { - flex-grow: 1 !important - } - - .flex-sm-shrink-0 { - flex-shrink: 0 !important - } - - .flex-sm-shrink-1 { - flex-shrink: 1 !important - } - - .flex-sm-wrap { - flex-wrap: wrap !important - } - - .flex-sm-nowrap { - flex-wrap: nowrap !important - } - - .flex-sm-wrap-reverse { - flex-wrap: wrap-reverse !important - } - - .justify-content-sm-start { - justify-content: flex-start !important - } - - .justify-content-sm-end { - justify-content: flex-end !important - } - - .justify-content-sm-center { - justify-content: center !important - } - - .justify-content-sm-between { - justify-content: space-between !important - } - - .justify-content-sm-around { - justify-content: space-around !important - } - - .justify-content-sm-evenly { - justify-content: space-evenly !important - } - - .align-items-sm-start { - align-items: flex-start !important - } - - .align-items-sm-end { - align-items: flex-end !important - } - - .align-items-sm-center { - align-items: center !important - } - - .align-items-sm-baseline { - align-items: baseline !important - } - - .align-items-sm-stretch { - align-items: stretch !important - } - - .align-content-sm-start { - align-content: flex-start !important - } - - .align-content-sm-end { - align-content: flex-end !important - } - - .align-content-sm-center { - align-content: center !important - } - - .align-content-sm-between { - align-content: space-between !important - } - - .align-content-sm-around { - align-content: space-around !important - } - - .align-content-sm-stretch { - align-content: stretch !important - } - - .align-self-sm-auto { - align-self: auto !important - } - - .align-self-sm-start { - align-self: flex-start !important - } - - .align-self-sm-end { - align-self: flex-end !important - } - - .align-self-sm-center { - align-self: center !important - } - - .align-self-sm-baseline { - align-self: baseline !important - } - - .align-self-sm-stretch { - align-self: stretch !important - } - - .order-sm-first { - order: -1 !important - } - - .order-sm-0 { - order: 0 !important - } - - .order-sm-1 { - order: 1 !important - } - - .order-sm-2 { - order: 2 !important - } - - .order-sm-3 { - order: 3 !important - } - - .order-sm-4 { - order: 4 !important - } - - .order-sm-5 { - order: 5 !important - } - - .order-sm-last { - order: 6 !important - } - - .m-sm-0 { - margin: 0 !important - } - - .m-sm-1 { - margin: .25rem !important - } - - .m-sm-2 { - margin: .5rem !important - } - - .m-sm-3 { - margin: 1rem !important - } - - .m-sm-4 { - margin: 1.5rem !important - } - - .m-sm-5 { - margin: 3rem !important - } - - .m-sm-auto { - margin: auto !important - } - - .mx-sm-0 { - margin-right: 0 !important; - 
margin-left: 0 !important - } - - .mx-sm-1 { - margin-right: .25rem !important; - margin-left: .25rem !important - } - - .mx-sm-2 { - margin-right: .5rem !important; - margin-left: .5rem !important - } - - .mx-sm-3 { - margin-right: 1rem !important; - margin-left: 1rem !important - } - - .mx-sm-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important - } - - .mx-sm-5 { - margin-right: 3rem !important; - margin-left: 3rem !important - } - - .mx-sm-auto { - margin-right: auto !important; - margin-left: auto !important - } - - .my-sm-0 { - margin-top: 0 !important; - margin-bottom: 0 !important - } - - .my-sm-1 { - margin-top: .25rem !important; - margin-bottom: .25rem !important - } - - .my-sm-2 { - margin-top: .5rem !important; - margin-bottom: .5rem !important - } - - .my-sm-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important - } - - .my-sm-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important - } - - .my-sm-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important - } - - .my-sm-auto { - margin-top: auto !important; - margin-bottom: auto !important - } - - .mt-sm-0 { - margin-top: 0 !important - } - - .mt-sm-1 { - margin-top: .25rem !important - } - - .mt-sm-2 { - margin-top: .5rem !important - } - - .mt-sm-3 { - margin-top: 1rem !important - } - - .mt-sm-4 { - margin-top: 1.5rem !important - } - - .mt-sm-5 { - margin-top: 3rem !important - } - - .mt-sm-auto { - margin-top: auto !important - } - - .me-sm-0 { - margin-right: 0 !important - } - - .me-sm-1 { - margin-right: .25rem !important - } - - .me-sm-2 { - margin-right: .5rem !important - } - - .me-sm-3 { - margin-right: 1rem !important - } - - .me-sm-4 { - margin-right: 1.5rem !important - } - - .me-sm-5 { - margin-right: 3rem !important - } - - .me-sm-auto { - margin-right: auto !important - } - - .mb-sm-0 { - margin-bottom: 0 !important - } - - .mb-sm-1 { - margin-bottom: .25rem !important - } - - .mb-sm-2 { - margin-bottom: .5rem !important - } - - .mb-sm-3 { - margin-bottom: 1rem !important - } - - .mb-sm-4 { - margin-bottom: 1.5rem !important - } - - .mb-sm-5 { - margin-bottom: 3rem !important - } - - .mb-sm-auto { - margin-bottom: auto !important - } - - .ms-sm-0 { - margin-left: 0 !important - } - - .ms-sm-1 { - margin-left: .25rem !important - } - - .ms-sm-2 { - margin-left: .5rem !important - } - - .ms-sm-3 { - margin-left: 1rem !important - } - - .ms-sm-4 { - margin-left: 1.5rem !important - } - - .ms-sm-5 { - margin-left: 3rem !important - } - - .ms-sm-auto { - margin-left: auto !important - } - - .p-sm-0 { - padding: 0 !important - } - - .p-sm-1 { - padding: .25rem !important - } - - .p-sm-2 { - padding: .5rem !important - } - - .p-sm-3 { - padding: 1rem !important - } - - .p-sm-4 { - padding: 1.5rem !important - } - - .p-sm-5 { - padding: 3rem !important - } - - .px-sm-0 { - padding-right: 0 !important; - padding-left: 0 !important - } - - .px-sm-1 { - padding-right: .25rem !important; - padding-left: .25rem !important - } - - .px-sm-2 { - padding-right: .5rem !important; - padding-left: .5rem !important - } - - .px-sm-3 { - padding-right: 1rem !important; - padding-left: 1rem !important - } - - .px-sm-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important - } - - .px-sm-5 { - padding-right: 3rem !important; - padding-left: 3rem !important - } - - .py-sm-0 { - padding-top: 0 !important; - padding-bottom: 0 !important - } - - .py-sm-1 { - padding-top: .25rem !important; - padding-bottom: .25rem !important - } - - .py-sm-2 { - padding-top: .5rem 
!important; - padding-bottom: .5rem !important - } - - .py-sm-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important - } - - .py-sm-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important - } - - .py-sm-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important - } - - .pt-sm-0 { - padding-top: 0 !important - } - - .pt-sm-1 { - padding-top: .25rem !important - } - - .pt-sm-2 { - padding-top: .5rem !important - } - - .pt-sm-3 { - padding-top: 1rem !important - } - - .pt-sm-4 { - padding-top: 1.5rem !important - } - - .pt-sm-5 { - padding-top: 3rem !important - } - - .pe-sm-0 { - padding-right: 0 !important - } - - .pe-sm-1 { - padding-right: .25rem !important - } - - .pe-sm-2 { - padding-right: .5rem !important - } - - .pe-sm-3 { - padding-right: 1rem !important - } - - .pe-sm-4 { - padding-right: 1.5rem !important - } - - .pe-sm-5 { - padding-right: 3rem !important - } - - .pb-sm-0 { - padding-bottom: 0 !important - } - - .pb-sm-1 { - padding-bottom: .25rem !important - } - - .pb-sm-2 { - padding-bottom: .5rem !important - } - - .pb-sm-3 { - padding-bottom: 1rem !important - } - - .pb-sm-4 { - padding-bottom: 1.5rem !important - } - - .pb-sm-5 { - padding-bottom: 3rem !important - } - - .ps-sm-0 { - padding-left: 0 !important - } - - .ps-sm-1 { - padding-left: .25rem !important - } - - .ps-sm-2 { - padding-left: .5rem !important - } - - .ps-sm-3 { - padding-left: 1rem !important - } - - .ps-sm-4 { - padding-left: 1.5rem !important - } - - .ps-sm-5 { - padding-left: 3rem !important - } - - .gap-sm-0 { - gap: 0 !important - } - - .gap-sm-1 { - gap: .25rem !important - } - - .gap-sm-2 { - gap: .5rem !important - } - - .gap-sm-3 { - gap: 1rem !important - } - - .gap-sm-4 { - gap: 1.5rem !important - } - - .gap-sm-5 { - gap: 3rem !important - } - - .text-sm-start { - text-align: left !important - } - - .text-sm-end { - text-align: right !important - } - - .text-sm-center { - text-align: center !important - } -} - -@media (min-width: 768px) { - .float-md-start { - float: left !important - } - - .float-md-end { - float: right !important - } - - .float-md-none { - float: none !important - } - - .d-md-inline { - display: inline !important - } - - .d-md-inline-block { - display: inline-block !important - } - - .d-md-block { - display: block !important - } - - .d-md-grid { - display: grid !important - } - - .d-md-table { - display: table !important - } - - .d-md-table-row { - display: table-row !important - } - - .d-md-table-cell { - display: table-cell !important - } - - .d-md-flex { - display: flex !important - } - - .d-md-inline-flex { - display: inline-flex !important - } - - .d-md-none { - display: none !important - } - - .flex-md-fill { - flex: 1 1 auto !important - } - - .flex-md-row { - flex-direction: row !important - } - - .flex-md-column { - flex-direction: column !important - } - - .flex-md-row-reverse { - flex-direction: row-reverse !important - } - - .flex-md-column-reverse { - flex-direction: column-reverse !important - } - - .flex-md-grow-0 { - flex-grow: 0 !important - } - - .flex-md-grow-1 { - flex-grow: 1 !important - } - - .flex-md-shrink-0 { - flex-shrink: 0 !important - } - - .flex-md-shrink-1 { - flex-shrink: 1 !important - } - - .flex-md-wrap { - flex-wrap: wrap !important - } - - .flex-md-nowrap { - flex-wrap: nowrap !important - } - - .flex-md-wrap-reverse { - flex-wrap: wrap-reverse !important - } - - .justify-content-md-start { - justify-content: flex-start !important - } - - .justify-content-md-end { - justify-content: flex-end 
!important - } - - .justify-content-md-center { - justify-content: center !important - } - - .justify-content-md-between { - justify-content: space-between !important - } - - .justify-content-md-around { - justify-content: space-around !important - } - - .justify-content-md-evenly { - justify-content: space-evenly !important - } - - .align-items-md-start { - align-items: flex-start !important - } - - .align-items-md-end { - align-items: flex-end !important - } - - .align-items-md-center { - align-items: center !important - } - - .align-items-md-baseline { - align-items: baseline !important - } - - .align-items-md-stretch { - align-items: stretch !important - } - - .align-content-md-start { - align-content: flex-start !important - } - - .align-content-md-end { - align-content: flex-end !important - } - - .align-content-md-center { - align-content: center !important - } - - .align-content-md-between { - align-content: space-between !important - } - - .align-content-md-around { - align-content: space-around !important - } - - .align-content-md-stretch { - align-content: stretch !important - } - - .align-self-md-auto { - align-self: auto !important - } - - .align-self-md-start { - align-self: flex-start !important - } - - .align-self-md-end { - align-self: flex-end !important - } - - .align-self-md-center { - align-self: center !important - } - - .align-self-md-baseline { - align-self: baseline !important - } - - .align-self-md-stretch { - align-self: stretch !important - } - - .order-md-first { - order: -1 !important - } - - .order-md-0 { - order: 0 !important - } - - .order-md-1 { - order: 1 !important - } - - .order-md-2 { - order: 2 !important - } - - .order-md-3 { - order: 3 !important - } - - .order-md-4 { - order: 4 !important - } - - .order-md-5 { - order: 5 !important - } - - .order-md-last { - order: 6 !important - } - - .m-md-0 { - margin: 0 !important - } - - .m-md-1 { - margin: .25rem !important - } - - .m-md-2 { - margin: .5rem !important - } - - .m-md-3 { - margin: 1rem !important - } - - .m-md-4 { - margin: 1.5rem !important - } - - .m-md-5 { - margin: 3rem !important - } - - .m-md-auto { - margin: auto !important - } - - .mx-md-0 { - margin-right: 0 !important; - margin-left: 0 !important - } - - .mx-md-1 { - margin-right: .25rem !important; - margin-left: .25rem !important - } - - .mx-md-2 { - margin-right: .5rem !important; - margin-left: .5rem !important - } - - .mx-md-3 { - margin-right: 1rem !important; - margin-left: 1rem !important - } - - .mx-md-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important - } - - .mx-md-5 { - margin-right: 3rem !important; - margin-left: 3rem !important - } - - .mx-md-auto { - margin-right: auto !important; - margin-left: auto !important - } - - .my-md-0 { - margin-top: 0 !important; - margin-bottom: 0 !important - } - - .my-md-1 { - margin-top: .25rem !important; - margin-bottom: .25rem !important - } - - .my-md-2 { - margin-top: .5rem !important; - margin-bottom: .5rem !important - } - - .my-md-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important - } - - .my-md-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important - } - - .my-md-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important - } - - .my-md-auto { - margin-top: auto !important; - margin-bottom: auto !important - } - - .mt-md-0 { - margin-top: 0 !important - } - - .mt-md-1 { - margin-top: .25rem !important - } - - .mt-md-2 { - margin-top: .5rem !important - } - - .mt-md-3 { - margin-top: 1rem !important - } - - .mt-md-4 { 
- margin-top: 1.5rem !important - } - - .mt-md-5 { - margin-top: 3rem !important - } - - .mt-md-auto { - margin-top: auto !important - } - - .me-md-0 { - margin-right: 0 !important - } - - .me-md-1 { - margin-right: .25rem !important - } - - .me-md-2 { - margin-right: .5rem !important - } - - .me-md-3 { - margin-right: 1rem !important - } - - .me-md-4 { - margin-right: 1.5rem !important - } - - .me-md-5 { - margin-right: 3rem !important - } - - .me-md-auto { - margin-right: auto !important - } - - .mb-md-0 { - margin-bottom: 0 !important - } - - .mb-md-1 { - margin-bottom: .25rem !important - } - - .mb-md-2 { - margin-bottom: .5rem !important - } - - .mb-md-3 { - margin-bottom: 1rem !important - } - - .mb-md-4 { - margin-bottom: 1.5rem !important - } - - .mb-md-5 { - margin-bottom: 3rem !important - } - - .mb-md-auto { - margin-bottom: auto !important - } - - .ms-md-0 { - margin-left: 0 !important - } - - .ms-md-1 { - margin-left: .25rem !important - } - - .ms-md-2 { - margin-left: .5rem !important - } - - .ms-md-3 { - margin-left: 1rem !important - } - - .ms-md-4 { - margin-left: 1.5rem !important - } - - .ms-md-5 { - margin-left: 3rem !important - } - - .ms-md-auto { - margin-left: auto !important - } - - .p-md-0 { - padding: 0 !important - } - - .p-md-1 { - padding: .25rem !important - } - - .p-md-2 { - padding: .5rem !important - } - - .p-md-3 { - padding: 1rem !important - } - - .p-md-4 { - padding: 1.5rem !important - } - - .p-md-5 { - padding: 3rem !important - } - - .px-md-0 { - padding-right: 0 !important; - padding-left: 0 !important - } - - .px-md-1 { - padding-right: .25rem !important; - padding-left: .25rem !important - } - - .px-md-2 { - padding-right: .5rem !important; - padding-left: .5rem !important - } - - .px-md-3 { - padding-right: 1rem !important; - padding-left: 1rem !important - } - - .px-md-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important - } - - .px-md-5 { - padding-right: 3rem !important; - padding-left: 3rem !important - } - - .py-md-0 { - padding-top: 0 !important; - padding-bottom: 0 !important - } - - .py-md-1 { - padding-top: .25rem !important; - padding-bottom: .25rem !important - } - - .py-md-2 { - padding-top: .5rem !important; - padding-bottom: .5rem !important - } - - .py-md-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important - } - - .py-md-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important - } - - .py-md-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important - } - - .pt-md-0 { - padding-top: 0 !important - } - - .pt-md-1 { - padding-top: .25rem !important - } - - .pt-md-2 { - padding-top: .5rem !important - } - - .pt-md-3 { - padding-top: 1rem !important - } - - .pt-md-4 { - padding-top: 1.5rem !important - } - - .pt-md-5 { - padding-top: 3rem !important - } - - .pe-md-0 { - padding-right: 0 !important - } - - .pe-md-1 { - padding-right: .25rem !important - } - - .pe-md-2 { - padding-right: .5rem !important - } - - .pe-md-3 { - padding-right: 1rem !important - } - - .pe-md-4 { - padding-right: 1.5rem !important - } - - .pe-md-5 { - padding-right: 3rem !important - } - - .pb-md-0 { - padding-bottom: 0 !important - } - - .pb-md-1 { - padding-bottom: .25rem !important - } - - .pb-md-2 { - padding-bottom: .5rem !important - } - - .pb-md-3 { - padding-bottom: 1rem !important - } - - .pb-md-4 { - padding-bottom: 1.5rem !important - } - - .pb-md-5 { - padding-bottom: 3rem !important - } - - .ps-md-0 { - padding-left: 0 !important - } - - .ps-md-1 { - padding-left: .25rem 
!important - } - - .ps-md-2 { - padding-left: .5rem !important - } - - .ps-md-3 { - padding-left: 1rem !important - } - - .ps-md-4 { - padding-left: 1.5rem !important - } - - .ps-md-5 { - padding-left: 3rem !important - } - - .gap-md-0 { - gap: 0 !important - } - - .gap-md-1 { - gap: .25rem !important - } - - .gap-md-2 { - gap: .5rem !important - } - - .gap-md-3 { - gap: 1rem !important - } - - .gap-md-4 { - gap: 1.5rem !important - } - - .gap-md-5 { - gap: 3rem !important - } - - .text-md-start { - text-align: left !important - } - - .text-md-end { - text-align: right !important - } - - .text-md-center { - text-align: center !important - } -} - -@media (min-width: 992px) { - .float-lg-start { - float: left !important - } - - .float-lg-end { - float: right !important - } - - .float-lg-none { - float: none !important - } - - .d-lg-inline { - display: inline !important - } - - .d-lg-inline-block { - display: inline-block !important - } - - .d-lg-block { - display: block !important - } - - .d-lg-grid { - display: grid !important - } - - .d-lg-table { - display: table !important - } - - .d-lg-table-row { - display: table-row !important - } - - .d-lg-table-cell { - display: table-cell !important - } - - .d-lg-flex { - display: flex !important - } - - .d-lg-inline-flex { - display: inline-flex !important - } - - .d-lg-none { - display: none !important - } - - .flex-lg-fill { - flex: 1 1 auto !important - } - - .flex-lg-row { - flex-direction: row !important - } - - .flex-lg-column { - flex-direction: column !important - } - - .flex-lg-row-reverse { - flex-direction: row-reverse !important - } - - .flex-lg-column-reverse { - flex-direction: column-reverse !important - } - - .flex-lg-grow-0 { - flex-grow: 0 !important - } - - .flex-lg-grow-1 { - flex-grow: 1 !important - } - - .flex-lg-shrink-0 { - flex-shrink: 0 !important - } - - .flex-lg-shrink-1 { - flex-shrink: 1 !important - } - - .flex-lg-wrap { - flex-wrap: wrap !important - } - - .flex-lg-nowrap { - flex-wrap: nowrap !important - } - - .flex-lg-wrap-reverse { - flex-wrap: wrap-reverse !important - } - - .justify-content-lg-start { - justify-content: flex-start !important - } - - .justify-content-lg-end { - justify-content: flex-end !important - } - - .justify-content-lg-center { - justify-content: center !important - } - - .justify-content-lg-between { - justify-content: space-between !important - } - - .justify-content-lg-around { - justify-content: space-around !important - } - - .justify-content-lg-evenly { - justify-content: space-evenly !important - } - - .align-items-lg-start { - align-items: flex-start !important - } - - .align-items-lg-end { - align-items: flex-end !important - } - - .align-items-lg-center { - align-items: center !important - } - - .align-items-lg-baseline { - align-items: baseline !important - } - - .align-items-lg-stretch { - align-items: stretch !important - } - - .align-content-lg-start { - align-content: flex-start !important - } - - .align-content-lg-end { - align-content: flex-end !important - } - - .align-content-lg-center { - align-content: center !important - } - - .align-content-lg-between { - align-content: space-between !important - } - - .align-content-lg-around { - align-content: space-around !important - } - - .align-content-lg-stretch { - align-content: stretch !important - } - - .align-self-lg-auto { - align-self: auto !important - } - - .align-self-lg-start { - align-self: flex-start !important - } - - .align-self-lg-end { - align-self: flex-end !important - } - - .align-self-lg-center { - 
align-self: center !important - } - - .align-self-lg-baseline { - align-self: baseline !important - } - - .align-self-lg-stretch { - align-self: stretch !important - } - - .order-lg-first { - order: -1 !important - } - - .order-lg-0 { - order: 0 !important - } - - .order-lg-1 { - order: 1 !important - } - - .order-lg-2 { - order: 2 !important - } - - .order-lg-3 { - order: 3 !important - } - - .order-lg-4 { - order: 4 !important - } - - .order-lg-5 { - order: 5 !important - } - - .order-lg-last { - order: 6 !important - } - - .m-lg-0 { - margin: 0 !important - } - - .m-lg-1 { - margin: .25rem !important - } - - .m-lg-2 { - margin: .5rem !important - } - - .m-lg-3 { - margin: 1rem !important - } - - .m-lg-4 { - margin: 1.5rem !important - } - - .m-lg-5 { - margin: 3rem !important - } - - .m-lg-auto { - margin: auto !important - } - - .mx-lg-0 { - margin-right: 0 !important; - margin-left: 0 !important - } - - .mx-lg-1 { - margin-right: .25rem !important; - margin-left: .25rem !important - } - - .mx-lg-2 { - margin-right: .5rem !important; - margin-left: .5rem !important - } - - .mx-lg-3 { - margin-right: 1rem !important; - margin-left: 1rem !important - } - - .mx-lg-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important - } - - .mx-lg-5 { - margin-right: 3rem !important; - margin-left: 3rem !important - } - - .mx-lg-auto { - margin-right: auto !important; - margin-left: auto !important - } - - .my-lg-0 { - margin-top: 0 !important; - margin-bottom: 0 !important - } - - .my-lg-1 { - margin-top: .25rem !important; - margin-bottom: .25rem !important - } - - .my-lg-2 { - margin-top: .5rem !important; - margin-bottom: .5rem !important - } - - .my-lg-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important - } - - .my-lg-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important - } - - .my-lg-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important - } - - .my-lg-auto { - margin-top: auto !important; - margin-bottom: auto !important - } - - .mt-lg-0 { - margin-top: 0 !important - } - - .mt-lg-1 { - margin-top: .25rem !important - } - - .mt-lg-2 { - margin-top: .5rem !important - } - - .mt-lg-3 { - margin-top: 1rem !important - } - - .mt-lg-4 { - margin-top: 1.5rem !important - } - - .mt-lg-5 { - margin-top: 3rem !important - } - - .mt-lg-auto { - margin-top: auto !important - } - - .me-lg-0 { - margin-right: 0 !important - } - - .me-lg-1 { - margin-right: .25rem !important - } - - .me-lg-2 { - margin-right: .5rem !important - } - - .me-lg-3 { - margin-right: 1rem !important - } - - .me-lg-4 { - margin-right: 1.5rem !important - } - - .me-lg-5 { - margin-right: 3rem !important - } - - .me-lg-auto { - margin-right: auto !important - } - - .mb-lg-0 { - margin-bottom: 0 !important - } - - .mb-lg-1 { - margin-bottom: .25rem !important - } - - .mb-lg-2 { - margin-bottom: .5rem !important - } - - .mb-lg-3 { - margin-bottom: 1rem !important - } - - .mb-lg-4 { - margin-bottom: 1.5rem !important - } - - .mb-lg-5 { - margin-bottom: 3rem !important - } - - .mb-lg-auto { - margin-bottom: auto !important - } - - .ms-lg-0 { - margin-left: 0 !important - } - - .ms-lg-1 { - margin-left: .25rem !important - } - - .ms-lg-2 { - margin-left: .5rem !important - } - - .ms-lg-3 { - margin-left: 1rem !important - } - - .ms-lg-4 { - margin-left: 1.5rem !important - } - - .ms-lg-5 { - margin-left: 3rem !important - } - - .ms-lg-auto { - margin-left: auto !important - } - - .p-lg-0 { - padding: 0 !important - } - - .p-lg-1 { - padding: .25rem !important - } - - .p-lg-2 { 
- padding: .5rem !important - } - - .p-lg-3 { - padding: 1rem !important - } - - .p-lg-4 { - padding: 1.5rem !important - } - - .p-lg-5 { - padding: 3rem !important - } - - .px-lg-0 { - padding-right: 0 !important; - padding-left: 0 !important - } - - .px-lg-1 { - padding-right: .25rem !important; - padding-left: .25rem !important - } - - .px-lg-2 { - padding-right: .5rem !important; - padding-left: .5rem !important - } - - .px-lg-3 { - padding-right: 1rem !important; - padding-left: 1rem !important - } - - .px-lg-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important - } - - .px-lg-5 { - padding-right: 3rem !important; - padding-left: 3rem !important - } - - .py-lg-0 { - padding-top: 0 !important; - padding-bottom: 0 !important - } - - .py-lg-1 { - padding-top: .25rem !important; - padding-bottom: .25rem !important - } - - .py-lg-2 { - padding-top: .5rem !important; - padding-bottom: .5rem !important - } - - .py-lg-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important - } - - .py-lg-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important - } - - .py-lg-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important - } - - .pt-lg-0 { - padding-top: 0 !important - } - - .pt-lg-1 { - padding-top: .25rem !important - } - - .pt-lg-2 { - padding-top: .5rem !important - } - - .pt-lg-3 { - padding-top: 1rem !important - } - - .pt-lg-4 { - padding-top: 1.5rem !important - } - - .pt-lg-5 { - padding-top: 3rem !important - } - - .pe-lg-0 { - padding-right: 0 !important - } - - .pe-lg-1 { - padding-right: .25rem !important - } - - .pe-lg-2 { - padding-right: .5rem !important - } - - .pe-lg-3 { - padding-right: 1rem !important - } - - .pe-lg-4 { - padding-right: 1.5rem !important - } - - .pe-lg-5 { - padding-right: 3rem !important - } - - .pb-lg-0 { - padding-bottom: 0 !important - } - - .pb-lg-1 { - padding-bottom: .25rem !important - } - - .pb-lg-2 { - padding-bottom: .5rem !important - } - - .pb-lg-3 { - padding-bottom: 1rem !important - } - - .pb-lg-4 { - padding-bottom: 1.5rem !important - } - - .pb-lg-5 { - padding-bottom: 3rem !important - } - - .ps-lg-0 { - padding-left: 0 !important - } - - .ps-lg-1 { - padding-left: .25rem !important - } - - .ps-lg-2 { - padding-left: .5rem !important - } - - .ps-lg-3 { - padding-left: 1rem !important - } - - .ps-lg-4 { - padding-left: 1.5rem !important - } - - .ps-lg-5 { - padding-left: 3rem !important - } - - .gap-lg-0 { - gap: 0 !important - } - - .gap-lg-1 { - gap: .25rem !important - } - - .gap-lg-2 { - gap: .5rem !important - } - - .gap-lg-3 { - gap: 1rem !important - } - - .gap-lg-4 { - gap: 1.5rem !important - } - - .gap-lg-5 { - gap: 3rem !important - } - - .text-lg-start { - text-align: left !important - } - - .text-lg-end { - text-align: right !important - } - - .text-lg-center { - text-align: center !important - } -} - -@media (min-width: 1200px) { - .float-xl-start { - float: left !important - } - - .float-xl-end { - float: right !important - } - - .float-xl-none { - float: none !important - } - - .d-xl-inline { - display: inline !important - } - - .d-xl-inline-block { - display: inline-block !important - } - - .d-xl-block { - display: block !important - } - - .d-xl-grid { - display: grid !important - } - - .d-xl-table { - display: table !important - } - - .d-xl-table-row { - display: table-row !important - } - - .d-xl-table-cell { - display: table-cell !important - } - - .d-xl-flex { - display: flex !important - } - - .d-xl-inline-flex { - display: inline-flex !important - } - - 
.d-xl-none { - display: none !important - } - - .flex-xl-fill { - flex: 1 1 auto !important - } - - .flex-xl-row { - flex-direction: row !important - } - - .flex-xl-column { - flex-direction: column !important - } - - .flex-xl-row-reverse { - flex-direction: row-reverse !important - } - - .flex-xl-column-reverse { - flex-direction: column-reverse !important - } - - .flex-xl-grow-0 { - flex-grow: 0 !important - } - - .flex-xl-grow-1 { - flex-grow: 1 !important - } - - .flex-xl-shrink-0 { - flex-shrink: 0 !important - } - - .flex-xl-shrink-1 { - flex-shrink: 1 !important - } - - .flex-xl-wrap { - flex-wrap: wrap !important - } - - .flex-xl-nowrap { - flex-wrap: nowrap !important - } - - .flex-xl-wrap-reverse { - flex-wrap: wrap-reverse !important - } - - .justify-content-xl-start { - justify-content: flex-start !important - } - - .justify-content-xl-end { - justify-content: flex-end !important - } - - .justify-content-xl-center { - justify-content: center !important - } - - .justify-content-xl-between { - justify-content: space-between !important - } - - .justify-content-xl-around { - justify-content: space-around !important - } - - .justify-content-xl-evenly { - justify-content: space-evenly !important - } - - .align-items-xl-start { - align-items: flex-start !important - } - - .align-items-xl-end { - align-items: flex-end !important - } - - .align-items-xl-center { - align-items: center !important - } - - .align-items-xl-baseline { - align-items: baseline !important - } - - .align-items-xl-stretch { - align-items: stretch !important - } - - .align-content-xl-start { - align-content: flex-start !important - } - - .align-content-xl-end { - align-content: flex-end !important - } - - .align-content-xl-center { - align-content: center !important - } - - .align-content-xl-between { - align-content: space-between !important - } - - .align-content-xl-around { - align-content: space-around !important - } - - .align-content-xl-stretch { - align-content: stretch !important - } - - .align-self-xl-auto { - align-self: auto !important - } - - .align-self-xl-start { - align-self: flex-start !important - } - - .align-self-xl-end { - align-self: flex-end !important - } - - .align-self-xl-center { - align-self: center !important - } - - .align-self-xl-baseline { - align-self: baseline !important - } - - .align-self-xl-stretch { - align-self: stretch !important - } - - .order-xl-first { - order: -1 !important - } - - .order-xl-0 { - order: 0 !important - } - - .order-xl-1 { - order: 1 !important - } - - .order-xl-2 { - order: 2 !important - } - - .order-xl-3 { - order: 3 !important - } - - .order-xl-4 { - order: 4 !important - } - - .order-xl-5 { - order: 5 !important - } - - .order-xl-last { - order: 6 !important - } - - .m-xl-0 { - margin: 0 !important - } - - .m-xl-1 { - margin: .25rem !important - } - - .m-xl-2 { - margin: .5rem !important - } - - .m-xl-3 { - margin: 1rem !important - } - - .m-xl-4 { - margin: 1.5rem !important - } - - .m-xl-5 { - margin: 3rem !important - } - - .m-xl-auto { - margin: auto !important - } - - .mx-xl-0 { - margin-right: 0 !important; - margin-left: 0 !important - } - - .mx-xl-1 { - margin-right: .25rem !important; - margin-left: .25rem !important - } - - .mx-xl-2 { - margin-right: .5rem !important; - margin-left: .5rem !important - } - - .mx-xl-3 { - margin-right: 1rem !important; - margin-left: 1rem !important - } - - .mx-xl-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important - } - - .mx-xl-5 { - margin-right: 3rem !important; - margin-left: 3rem 
!important - } - - .mx-xl-auto { - margin-right: auto !important; - margin-left: auto !important - } - - .my-xl-0 { - margin-top: 0 !important; - margin-bottom: 0 !important - } - - .my-xl-1 { - margin-top: .25rem !important; - margin-bottom: .25rem !important - } - - .my-xl-2 { - margin-top: .5rem !important; - margin-bottom: .5rem !important - } - - .my-xl-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important - } - - .my-xl-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important - } - - .my-xl-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important - } - - .my-xl-auto { - margin-top: auto !important; - margin-bottom: auto !important - } - - .mt-xl-0 { - margin-top: 0 !important - } - - .mt-xl-1 { - margin-top: .25rem !important - } - - .mt-xl-2 { - margin-top: .5rem !important - } - - .mt-xl-3 { - margin-top: 1rem !important - } - - .mt-xl-4 { - margin-top: 1.5rem !important - } - - .mt-xl-5 { - margin-top: 3rem !important - } - - .mt-xl-auto { - margin-top: auto !important - } - - .me-xl-0 { - margin-right: 0 !important - } - - .me-xl-1 { - margin-right: .25rem !important - } - - .me-xl-2 { - margin-right: .5rem !important - } - - .me-xl-3 { - margin-right: 1rem !important - } - - .me-xl-4 { - margin-right: 1.5rem !important - } - - .me-xl-5 { - margin-right: 3rem !important - } - - .me-xl-auto { - margin-right: auto !important - } - - .mb-xl-0 { - margin-bottom: 0 !important - } - - .mb-xl-1 { - margin-bottom: .25rem !important - } - - .mb-xl-2 { - margin-bottom: .5rem !important - } - - .mb-xl-3 { - margin-bottom: 1rem !important - } - - .mb-xl-4 { - margin-bottom: 1.5rem !important - } - - .mb-xl-5 { - margin-bottom: 3rem !important - } - - .mb-xl-auto { - margin-bottom: auto !important - } - - .ms-xl-0 { - margin-left: 0 !important - } - - .ms-xl-1 { - margin-left: .25rem !important - } - - .ms-xl-2 { - margin-left: .5rem !important - } - - .ms-xl-3 { - margin-left: 1rem !important - } - - .ms-xl-4 { - margin-left: 1.5rem !important - } - - .ms-xl-5 { - margin-left: 3rem !important - } - - .ms-xl-auto { - margin-left: auto !important - } - - .p-xl-0 { - padding: 0 !important - } - - .p-xl-1 { - padding: .25rem !important - } - - .p-xl-2 { - padding: .5rem !important - } - - .p-xl-3 { - padding: 1rem !important - } - - .p-xl-4 { - padding: 1.5rem !important - } - - .p-xl-5 { - padding: 3rem !important - } - - .px-xl-0 { - padding-right: 0 !important; - padding-left: 0 !important - } - - .px-xl-1 { - padding-right: .25rem !important; - padding-left: .25rem !important - } - - .px-xl-2 { - padding-right: .5rem !important; - padding-left: .5rem !important - } - - .px-xl-3 { - padding-right: 1rem !important; - padding-left: 1rem !important - } - - .px-xl-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important - } - - .px-xl-5 { - padding-right: 3rem !important; - padding-left: 3rem !important - } - - .py-xl-0 { - padding-top: 0 !important; - padding-bottom: 0 !important - } - - .py-xl-1 { - padding-top: .25rem !important; - padding-bottom: .25rem !important - } - - .py-xl-2 { - padding-top: .5rem !important; - padding-bottom: .5rem !important - } - - .py-xl-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important - } - - .py-xl-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important - } - - .py-xl-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important - } - - .pt-xl-0 { - padding-top: 0 !important - } - - .pt-xl-1 { - padding-top: .25rem !important - } - - .pt-xl-2 { - padding-top: 
.5rem !important - } - - .pt-xl-3 { - padding-top: 1rem !important - } - - .pt-xl-4 { - padding-top: 1.5rem !important - } - - .pt-xl-5 { - padding-top: 3rem !important - } - - .pe-xl-0 { - padding-right: 0 !important - } - - .pe-xl-1 { - padding-right: .25rem !important - } - - .pe-xl-2 { - padding-right: .5rem !important - } - - .pe-xl-3 { - padding-right: 1rem !important - } - - .pe-xl-4 { - padding-right: 1.5rem !important - } - - .pe-xl-5 { - padding-right: 3rem !important - } - - .pb-xl-0 { - padding-bottom: 0 !important - } - - .pb-xl-1 { - padding-bottom: .25rem !important - } - - .pb-xl-2 { - padding-bottom: .5rem !important - } - - .pb-xl-3 { - padding-bottom: 1rem !important - } - - .pb-xl-4 { - padding-bottom: 1.5rem !important - } - - .pb-xl-5 { - padding-bottom: 3rem !important - } - - .ps-xl-0 { - padding-left: 0 !important - } - - .ps-xl-1 { - padding-left: .25rem !important - } - - .ps-xl-2 { - padding-left: .5rem !important - } - - .ps-xl-3 { - padding-left: 1rem !important - } - - .ps-xl-4 { - padding-left: 1.5rem !important - } - - .ps-xl-5 { - padding-left: 3rem !important - } - - .gap-xl-0 { - gap: 0 !important - } - - .gap-xl-1 { - gap: .25rem !important - } - - .gap-xl-2 { - gap: .5rem !important - } - - .gap-xl-3 { - gap: 1rem !important - } - - .gap-xl-4 { - gap: 1.5rem !important - } - - .gap-xl-5 { - gap: 3rem !important - } - - .text-xl-start { - text-align: left !important - } - - .text-xl-end { - text-align: right !important - } - - .text-xl-center { - text-align: center !important - } -} - -@media (min-width: 1400px) { - .float-xxl-start { - float: left !important - } - - .float-xxl-end { - float: right !important - } - - .float-xxl-none { - float: none !important - } - - .d-xxl-inline { - display: inline !important - } - - .d-xxl-inline-block { - display: inline-block !important - } - - .d-xxl-block { - display: block !important - } - - .d-xxl-grid { - display: grid !important - } - - .d-xxl-table { - display: table !important - } - - .d-xxl-table-row { - display: table-row !important - } - - .d-xxl-table-cell { - display: table-cell !important - } - - .d-xxl-flex { - display: flex !important - } - - .d-xxl-inline-flex { - display: inline-flex !important - } - - .d-xxl-none { - display: none !important - } - - .flex-xxl-fill { - flex: 1 1 auto !important - } - - .flex-xxl-row { - flex-direction: row !important - } - - .flex-xxl-column { - flex-direction: column !important - } - - .flex-xxl-row-reverse { - flex-direction: row-reverse !important - } - - .flex-xxl-column-reverse { - flex-direction: column-reverse !important - } - - .flex-xxl-grow-0 { - flex-grow: 0 !important - } - - .flex-xxl-grow-1 { - flex-grow: 1 !important - } - - .flex-xxl-shrink-0 { - flex-shrink: 0 !important - } - - .flex-xxl-shrink-1 { - flex-shrink: 1 !important - } - - .flex-xxl-wrap { - flex-wrap: wrap !important - } - - .flex-xxl-nowrap { - flex-wrap: nowrap !important - } - - .flex-xxl-wrap-reverse { - flex-wrap: wrap-reverse !important - } - - .justify-content-xxl-start { - justify-content: flex-start !important - } - - .justify-content-xxl-end { - justify-content: flex-end !important - } - - .justify-content-xxl-center { - justify-content: center !important - } - - .justify-content-xxl-between { - justify-content: space-between !important - } - - .justify-content-xxl-around { - justify-content: space-around !important - } - - .justify-content-xxl-evenly { - justify-content: space-evenly !important - } - - .align-items-xxl-start { - align-items: flex-start !important - } - - 
.align-items-xxl-end { - align-items: flex-end !important - } - - .align-items-xxl-center { - align-items: center !important - } - - .align-items-xxl-baseline { - align-items: baseline !important - } - - .align-items-xxl-stretch { - align-items: stretch !important - } - - .align-content-xxl-start { - align-content: flex-start !important - } - - .align-content-xxl-end { - align-content: flex-end !important - } - - .align-content-xxl-center { - align-content: center !important - } - - .align-content-xxl-between { - align-content: space-between !important - } - - .align-content-xxl-around { - align-content: space-around !important - } - - .align-content-xxl-stretch { - align-content: stretch !important - } - - .align-self-xxl-auto { - align-self: auto !important - } - - .align-self-xxl-start { - align-self: flex-start !important - } - - .align-self-xxl-end { - align-self: flex-end !important - } - - .align-self-xxl-center { - align-self: center !important - } - - .align-self-xxl-baseline { - align-self: baseline !important - } - - .align-self-xxl-stretch { - align-self: stretch !important - } - - .order-xxl-first { - order: -1 !important - } - - .order-xxl-0 { - order: 0 !important - } - - .order-xxl-1 { - order: 1 !important - } - - .order-xxl-2 { - order: 2 !important - } - - .order-xxl-3 { - order: 3 !important - } - - .order-xxl-4 { - order: 4 !important - } - - .order-xxl-5 { - order: 5 !important - } - - .order-xxl-last { - order: 6 !important - } - - .m-xxl-0 { - margin: 0 !important - } - - .m-xxl-1 { - margin: .25rem !important - } - - .m-xxl-2 { - margin: .5rem !important - } - - .m-xxl-3 { - margin: 1rem !important - } - - .m-xxl-4 { - margin: 1.5rem !important - } - - .m-xxl-5 { - margin: 3rem !important - } - - .m-xxl-auto { - margin: auto !important - } - - .mx-xxl-0 { - margin-right: 0 !important; - margin-left: 0 !important - } - - .mx-xxl-1 { - margin-right: .25rem !important; - margin-left: .25rem !important - } - - .mx-xxl-2 { - margin-right: .5rem !important; - margin-left: .5rem !important - } - - .mx-xxl-3 { - margin-right: 1rem !important; - margin-left: 1rem !important - } - - .mx-xxl-4 { - margin-right: 1.5rem !important; - margin-left: 1.5rem !important - } - - .mx-xxl-5 { - margin-right: 3rem !important; - margin-left: 3rem !important - } - - .mx-xxl-auto { - margin-right: auto !important; - margin-left: auto !important - } - - .my-xxl-0 { - margin-top: 0 !important; - margin-bottom: 0 !important - } - - .my-xxl-1 { - margin-top: .25rem !important; - margin-bottom: .25rem !important - } - - .my-xxl-2 { - margin-top: .5rem !important; - margin-bottom: .5rem !important - } - - .my-xxl-3 { - margin-top: 1rem !important; - margin-bottom: 1rem !important - } - - .my-xxl-4 { - margin-top: 1.5rem !important; - margin-bottom: 1.5rem !important - } - - .my-xxl-5 { - margin-top: 3rem !important; - margin-bottom: 3rem !important - } - - .my-xxl-auto { - margin-top: auto !important; - margin-bottom: auto !important - } - - .mt-xxl-0 { - margin-top: 0 !important - } - - .mt-xxl-1 { - margin-top: .25rem !important - } - - .mt-xxl-2 { - margin-top: .5rem !important - } - - .mt-xxl-3 { - margin-top: 1rem !important - } - - .mt-xxl-4 { - margin-top: 1.5rem !important - } - - .mt-xxl-5 { - margin-top: 3rem !important - } - - .mt-xxl-auto { - margin-top: auto !important - } - - .me-xxl-0 { - margin-right: 0 !important - } - - .me-xxl-1 { - margin-right: .25rem !important - } - - .me-xxl-2 { - margin-right: .5rem !important - } - - .me-xxl-3 { - margin-right: 1rem !important - } - - 
.me-xxl-4 { - margin-right: 1.5rem !important - } - - .me-xxl-5 { - margin-right: 3rem !important - } - - .me-xxl-auto { - margin-right: auto !important - } - - .mb-xxl-0 { - margin-bottom: 0 !important - } - - .mb-xxl-1 { - margin-bottom: .25rem !important - } - - .mb-xxl-2 { - margin-bottom: .5rem !important - } - - .mb-xxl-3 { - margin-bottom: 1rem !important - } - - .mb-xxl-4 { - margin-bottom: 1.5rem !important - } - - .mb-xxl-5 { - margin-bottom: 3rem !important - } - - .mb-xxl-auto { - margin-bottom: auto !important - } - - .ms-xxl-0 { - margin-left: 0 !important - } - - .ms-xxl-1 { - margin-left: .25rem !important - } - - .ms-xxl-2 { - margin-left: .5rem !important - } - - .ms-xxl-3 { - margin-left: 1rem !important - } - - .ms-xxl-4 { - margin-left: 1.5rem !important - } - - .ms-xxl-5 { - margin-left: 3rem !important - } - - .ms-xxl-auto { - margin-left: auto !important - } - - .p-xxl-0 { - padding: 0 !important - } - - .p-xxl-1 { - padding: .25rem !important - } - - .p-xxl-2 { - padding: .5rem !important - } - - .p-xxl-3 { - padding: 1rem !important - } - - .p-xxl-4 { - padding: 1.5rem !important - } - - .p-xxl-5 { - padding: 3rem !important - } - - .px-xxl-0 { - padding-right: 0 !important; - padding-left: 0 !important - } - - .px-xxl-1 { - padding-right: .25rem !important; - padding-left: .25rem !important - } - - .px-xxl-2 { - padding-right: .5rem !important; - padding-left: .5rem !important - } - - .px-xxl-3 { - padding-right: 1rem !important; - padding-left: 1rem !important - } - - .px-xxl-4 { - padding-right: 1.5rem !important; - padding-left: 1.5rem !important - } - - .px-xxl-5 { - padding-right: 3rem !important; - padding-left: 3rem !important - } - - .py-xxl-0 { - padding-top: 0 !important; - padding-bottom: 0 !important - } - - .py-xxl-1 { - padding-top: .25rem !important; - padding-bottom: .25rem !important - } - - .py-xxl-2 { - padding-top: .5rem !important; - padding-bottom: .5rem !important - } - - .py-xxl-3 { - padding-top: 1rem !important; - padding-bottom: 1rem !important - } - - .py-xxl-4 { - padding-top: 1.5rem !important; - padding-bottom: 1.5rem !important - } - - .py-xxl-5 { - padding-top: 3rem !important; - padding-bottom: 3rem !important - } - - .pt-xxl-0 { - padding-top: 0 !important - } - - .pt-xxl-1 { - padding-top: .25rem !important - } - - .pt-xxl-2 { - padding-top: .5rem !important - } - - .pt-xxl-3 { - padding-top: 1rem !important - } - - .pt-xxl-4 { - padding-top: 1.5rem !important - } - - .pt-xxl-5 { - padding-top: 3rem !important - } - - .pe-xxl-0 { - padding-right: 0 !important - } - - .pe-xxl-1 { - padding-right: .25rem !important - } - - .pe-xxl-2 { - padding-right: .5rem !important - } - - .pe-xxl-3 { - padding-right: 1rem !important - } - - .pe-xxl-4 { - padding-right: 1.5rem !important - } - - .pe-xxl-5 { - padding-right: 3rem !important - } - - .pb-xxl-0 { - padding-bottom: 0 !important - } - - .pb-xxl-1 { - padding-bottom: .25rem !important - } - - .pb-xxl-2 { - padding-bottom: .5rem !important - } - - .pb-xxl-3 { - padding-bottom: 1rem !important - } - - .pb-xxl-4 { - padding-bottom: 1.5rem !important - } - - .pb-xxl-5 { - padding-bottom: 3rem !important - } - - .ps-xxl-0 { - padding-left: 0 !important - } - - .ps-xxl-1 { - padding-left: .25rem !important - } - - .ps-xxl-2 { - padding-left: .5rem !important - } - - .ps-xxl-3 { - padding-left: 1rem !important - } - - .ps-xxl-4 { - padding-left: 1.5rem !important - } - - .ps-xxl-5 { - padding-left: 3rem !important - } - - .gap-xxl-0 { - gap: 0 !important - } - - .gap-xxl-1 { - gap: .25rem 
!important - } - - .gap-xxl-2 { - gap: .5rem !important - } - - .gap-xxl-3 { - gap: 1rem !important - } - - .gap-xxl-4 { - gap: 1.5rem !important - } - - .gap-xxl-5 { - gap: 3rem !important - } - - .text-xxl-start { - text-align: left !important - } - - .text-xxl-end { - text-align: right !important - } - - .text-xxl-center { - text-align: center !important - } -} - -.bg-default { - color: #fff -} - -.bg-primary { - color: #fff -} - -.bg-secondary { - color: #fff -} - -.bg-success { - color: #fff -} - -.bg-info { - color: #fff -} - -.bg-warning { - color: #fff -} - -.bg-danger { - color: #fff -} - -.bg-light { - color: #000 -} - -.bg-dark { - color: #fff -} - -@media (min-width: 1200px) { - .fs-1 { - font-size: 2.5rem !important - } - - .fs-2 { - font-size: 2rem !important - } - - .fs-3 { - font-size: 1.75rem !important - } - - .fs-4 { - font-size: 1.5rem !important - } -} - -@media print { - .d-print-inline { - display: inline !important - } - - .d-print-inline-block { - display: inline-block !important - } - - .d-print-block { - display: block !important - } - - .d-print-grid { - display: grid !important - } - - .d-print-table { - display: table !important - } - - .d-print-table-row { - display: table-row !important - } - - .d-print-table-cell { - display: table-cell !important - } - - .d-print-flex { - display: flex !important - } - - .d-print-inline-flex { - display: inline-flex !important - } - - .d-print-none { - display: none !important - } -} - -.table th[align=left] { - text-align: left -} - -.table th[align=right] { - text-align: right -} - -.table th[align=center] { - text-align: center -} - -.well { - display: block; - background-color: rgba(248, 245, 240, 0.25); - color: #3e3f3a; - padding: 1rem; - border-radius: .375rem -} - -.well-lg { - padding: 1.5rem; - border-radius: .5rem -} - -.well-sm { - padding: 0.5rem; - border-radius: .25rem -} - -.draggable .well { - background-color: #fdfdfb -} - -.dropdown-menu>li.active>a { - color: #8e8c84; - text-decoration: none; - background-color: #f8f5f0; - background-image: var(--bs-gradient) -} - -.navbar:not(.fixed-bottom):not(.navbar-fixed-bottom):not(.navbar-fixed-bottom)+div>.tab-content>.tab-pane { - --bslib-navbar-margin: 20px; - margin-top: var(--bslib-navbar-margin) -} - -ul.nav.navbar-nav { - flex: 1; - -webkit-flex: 1 -} - -ul.nav.navbar-nav.navbar-right { - flex: unset; - -webkit-flex: unset; - display: flex; - display: -webkit-flex; - justify-content: flex-end; - -webkit-justify-content: flex-end -} - -.navbar.navbar-default { - background-color: #3e3f3a !important -} - -.navbar.navbar-inverse { - background-color: #93c54b !important -} - -.navbar-toggle>.icon-bar { - display: none -} - -@media (max-width: 575.98px) { - .navbar-header { - width: 100% - } - - .navbar-header .navbar-toggle { - float: right - } -} - -.nav-tabs>li.active>a { - color: #495057; - background-color: #fff; - border-color: #dfd7ca #dfd7ca #fff -} - -.nav-pills>li.active>a { - color: #8e8c84; - background-color: #f8f5f0 -} - -.nav-stacked { - flex-direction: column; - -webkit-flex-direction: column -} - -.progress-bar-default { - background-color: #8e8c84; - color: #fff -} - -.progress-bar-primary { - background-color: #325d88; - color: #fff -} - -.progress-bar-secondary { - background-color: #8e8c84; - color: #fff -} - -.progress-bar-success { - background-color: #93c54b; - color: #fff -} - -.progress-bar-info { - background-color: #29abe0; - color: #fff -} - -.progress-bar-warning { - background-color: #f47c3c; - color: #fff -} - 
-.progress-bar-danger { - background-color: #d9534f; - color: #fff -} - -.progress-bar-light { - background-color: #f8f5f0; - color: #000 -} - -.progress-bar-dark { - background-color: #3e3f3a; - color: #fff -} - -@font-face { - font-family: 'Glyphicons Halflings'; - src: url("fonts/bootstrap/glyphicons-halflings-regular.eot"); - src: url("fonts/bootstrap/glyphicons-halflings-regular.eot?#iefix") format("embedded-opentype"), url("fonts/bootstrap/glyphicons-halflings-regular.woff2") format("woff2"), url("fonts/bootstrap/glyphicons-halflings-regular.woff") format("woff"), url("fonts/bootstrap/glyphicons-halflings-regular.ttf") format("truetype"), url("fonts/bootstrap/glyphicons-halflings-regular.svg#glyphicons_halflingsregular") format("svg") -} - -.glyphicon { - position: relative; - top: 1px; - display: inline-block; - font-family: 'Glyphicons Halflings'; - font-style: normal; - font-weight: normal; - line-height: 1; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale -} - -.glyphicon-asterisk:before { - content: "\2a" -} - -.glyphicon-plus:before { - content: "\2b" -} - -.glyphicon-euro:before, -.glyphicon-eur:before { - content: "\20ac" -} - -.glyphicon-minus:before { - content: "\2212" -} - -.glyphicon-cloud:before { - content: "\2601" -} - -.glyphicon-envelope:before { - content: "\2709" -} - -.glyphicon-pencil:before { - content: "\270f" -} - -.glyphicon-glass:before { - content: "\e001" -} - -.glyphicon-music:before { - content: "\e002" -} - -.glyphicon-search:before { - content: "\e003" -} - -.glyphicon-heart:before { - content: "\e005" -} - -.glyphicon-star:before { - content: "\e006" -} - -.glyphicon-star-empty:before { - content: "\e007" -} - -.glyphicon-user:before { - content: "\e008" -} - -.glyphicon-film:before { - content: "\e009" -} - -.glyphicon-th-large:before { - content: "\e010" -} - -.glyphicon-th:before { - content: "\e011" -} - -.glyphicon-th-list:before { - content: "\e012" -} - -.glyphicon-ok:before { - content: "\e013" -} - -.glyphicon-remove:before { - content: "\e014" -} - -.glyphicon-zoom-in:before { - content: "\e015" -} - -.glyphicon-zoom-out:before { - content: "\e016" -} - -.glyphicon-off:before { - content: "\e017" -} - -.glyphicon-signal:before { - content: "\e018" -} - -.glyphicon-cog:before { - content: "\e019" -} - -.glyphicon-trash:before { - content: "\e020" -} - -.glyphicon-home:before { - content: "\e021" -} - -.glyphicon-file:before { - content: "\e022" -} - -.glyphicon-time:before { - content: "\e023" -} - -.glyphicon-road:before { - content: "\e024" -} - -.glyphicon-download-alt:before { - content: "\e025" -} - -.glyphicon-download:before { - content: "\e026" -} - -.glyphicon-upload:before { - content: "\e027" -} - -.glyphicon-inbox:before { - content: "\e028" -} - -.glyphicon-play-circle:before { - content: "\e029" -} - -.glyphicon-repeat:before { - content: "\e030" -} - -.glyphicon-refresh:before { - content: "\e031" -} - -.glyphicon-list-alt:before { - content: "\e032" -} - -.glyphicon-lock:before { - content: "\e033" -} - -.glyphicon-flag:before { - content: "\e034" -} - -.glyphicon-headphones:before { - content: "\e035" -} - -.glyphicon-volume-off:before { - content: "\e036" -} - -.glyphicon-volume-down:before { - content: "\e037" -} - -.glyphicon-volume-up:before { - content: "\e038" -} - -.glyphicon-qrcode:before { - content: "\e039" -} - -.glyphicon-barcode:before { - content: "\e040" -} - -.glyphicon-tag:before { - content: "\e041" -} - -.glyphicon-tags:before { - content: "\e042" -} - -.glyphicon-book:before { - 
content: "\e043" -} - -.glyphicon-bookmark:before { - content: "\e044" -} - -.glyphicon-print:before { - content: "\e045" -} - -.glyphicon-camera:before { - content: "\e046" -} - -.glyphicon-font:before { - content: "\e047" -} - -.glyphicon-bold:before { - content: "\e048" -} - -.glyphicon-italic:before { - content: "\e049" -} - -.glyphicon-text-height:before { - content: "\e050" -} - -.glyphicon-text-width:before { - content: "\e051" -} - -.glyphicon-align-left:before { - content: "\e052" -} - -.glyphicon-align-center:before { - content: "\e053" -} - -.glyphicon-align-right:before { - content: "\e054" -} - -.glyphicon-align-justify:before { - content: "\e055" -} - -.glyphicon-list:before { - content: "\e056" -} - -.glyphicon-indent-left:before { - content: "\e057" -} - -.glyphicon-indent-right:before { - content: "\e058" -} - -.glyphicon-facetime-video:before { - content: "\e059" -} - -.glyphicon-picture:before { - content: "\e060" -} - -.glyphicon-map-marker:before { - content: "\e062" -} - -.glyphicon-adjust:before { - content: "\e063" -} - -.glyphicon-tint:before { - content: "\e064" -} - -.glyphicon-edit:before { - content: "\e065" -} - -.glyphicon-share:before { - content: "\e066" -} - -.glyphicon-check:before { - content: "\e067" -} - -.glyphicon-move:before { - content: "\e068" -} - -.glyphicon-step-backward:before { - content: "\e069" -} - -.glyphicon-fast-backward:before { - content: "\e070" -} - -.glyphicon-backward:before { - content: "\e071" -} - -.glyphicon-play:before { - content: "\e072" -} - -.glyphicon-pause:before { - content: "\e073" -} - -.glyphicon-stop:before { - content: "\e074" -} - -.glyphicon-forward:before { - content: "\e075" -} - -.glyphicon-fast-forward:before { - content: "\e076" -} - -.glyphicon-step-forward:before { - content: "\e077" -} - -.glyphicon-eject:before { - content: "\e078" -} - -.glyphicon-chevron-left:before { - content: "\e079" -} - -.glyphicon-chevron-right:before { - content: "\e080" -} - -.glyphicon-plus-sign:before { - content: "\e081" -} - -.glyphicon-minus-sign:before { - content: "\e082" -} - -.glyphicon-remove-sign:before { - content: "\e083" -} - -.glyphicon-ok-sign:before { - content: "\e084" -} - -.glyphicon-question-sign:before { - content: "\e085" -} - -.glyphicon-info-sign:before { - content: "\e086" -} - -.glyphicon-screenshot:before { - content: "\e087" -} - -.glyphicon-remove-circle:before { - content: "\e088" -} - -.glyphicon-ok-circle:before { - content: "\e089" -} - -.glyphicon-ban-circle:before { - content: "\e090" -} - -.glyphicon-arrow-left:before { - content: "\e091" -} - -.glyphicon-arrow-right:before { - content: "\e092" -} - -.glyphicon-arrow-up:before { - content: "\e093" -} - -.glyphicon-arrow-down:before { - content: "\e094" -} - -.glyphicon-share-alt:before { - content: "\e095" -} - -.glyphicon-resize-full:before { - content: "\e096" -} - -.glyphicon-resize-small:before { - content: "\e097" -} - -.glyphicon-exclamation-sign:before { - content: "\e101" -} - -.glyphicon-gift:before { - content: "\e102" -} - -.glyphicon-leaf:before { - content: "\e103" -} - -.glyphicon-fire:before { - content: "\e104" -} - -.glyphicon-eye-open:before { - content: "\e105" -} - -.glyphicon-eye-close:before { - content: "\e106" -} - -.glyphicon-warning-sign:before { - content: "\e107" -} - -.glyphicon-plane:before { - content: "\e108" -} - -.glyphicon-calendar:before { - content: "\e109" -} - -.glyphicon-random:before { - content: "\e110" -} - -.glyphicon-comment:before { - content: "\e111" -} - -.glyphicon-magnet:before { - content: 
"\e112" -} - -.glyphicon-chevron-up:before { - content: "\e113" -} - -.glyphicon-chevron-down:before { - content: "\e114" -} - -.glyphicon-retweet:before { - content: "\e115" -} - -.glyphicon-shopping-cart:before { - content: "\e116" -} - -.glyphicon-folder-close:before { - content: "\e117" -} - -.glyphicon-folder-open:before { - content: "\e118" -} - -.glyphicon-resize-vertical:before { - content: "\e119" -} - -.glyphicon-resize-horizontal:before { - content: "\e120" -} - -.glyphicon-hdd:before { - content: "\e121" -} - -.glyphicon-bullhorn:before { - content: "\e122" -} - -.glyphicon-bell:before { - content: "\e123" -} - -.glyphicon-certificate:before { - content: "\e124" -} - -.glyphicon-thumbs-up:before { - content: "\e125" -} - -.glyphicon-thumbs-down:before { - content: "\e126" -} - -.glyphicon-hand-right:before { - content: "\e127" -} - -.glyphicon-hand-left:before { - content: "\e128" -} - -.glyphicon-hand-up:before { - content: "\e129" -} - -.glyphicon-hand-down:before { - content: "\e130" -} - -.glyphicon-circle-arrow-right:before { - content: "\e131" -} - -.glyphicon-circle-arrow-left:before { - content: "\e132" -} - -.glyphicon-circle-arrow-up:before { - content: "\e133" -} - -.glyphicon-circle-arrow-down:before { - content: "\e134" -} - -.glyphicon-globe:before { - content: "\e135" -} - -.glyphicon-wrench:before { - content: "\e136" -} - -.glyphicon-tasks:before { - content: "\e137" -} - -.glyphicon-filter:before { - content: "\e138" -} - -.glyphicon-briefcase:before { - content: "\e139" -} - -.glyphicon-fullscreen:before { - content: "\e140" -} - -.glyphicon-dashboard:before { - content: "\e141" -} - -.glyphicon-paperclip:before { - content: "\e142" -} - -.glyphicon-heart-empty:before { - content: "\e143" -} - -.glyphicon-link:before { - content: "\e144" -} - -.glyphicon-phone:before { - content: "\e145" -} - -.glyphicon-pushpin:before { - content: "\e146" -} - -.glyphicon-usd:before { - content: "\e148" -} - -.glyphicon-gbp:before { - content: "\e149" -} - -.glyphicon-sort:before { - content: "\e150" -} - -.glyphicon-sort-by-alphabet:before { - content: "\e151" -} - -.glyphicon-sort-by-alphabet-alt:before { - content: "\e152" -} - -.glyphicon-sort-by-order:before { - content: "\e153" -} - -.glyphicon-sort-by-order-alt:before { - content: "\e154" -} - -.glyphicon-sort-by-attributes:before { - content: "\e155" -} - -.glyphicon-sort-by-attributes-alt:before { - content: "\e156" -} - -.glyphicon-unchecked:before { - content: "\e157" -} - -.glyphicon-expand:before { - content: "\e158" -} - -.glyphicon-collapse-down:before { - content: "\e159" -} - -.glyphicon-collapse-up:before { - content: "\e160" -} - -.glyphicon-log-in:before { - content: "\e161" -} - -.glyphicon-flash:before { - content: "\e162" -} - -.glyphicon-log-out:before { - content: "\e163" -} - -.glyphicon-new-window:before { - content: "\e164" -} - -.glyphicon-record:before { - content: "\e165" -} - -.glyphicon-save:before { - content: "\e166" -} - -.glyphicon-open:before { - content: "\e167" -} - -.glyphicon-saved:before { - content: "\e168" -} - -.glyphicon-import:before { - content: "\e169" -} - -.glyphicon-export:before { - content: "\e170" -} - -.glyphicon-send:before { - content: "\e171" -} - -.glyphicon-floppy-disk:before { - content: "\e172" -} - -.glyphicon-floppy-saved:before { - content: "\e173" -} - -.glyphicon-floppy-remove:before { - content: "\e174" -} - -.glyphicon-floppy-save:before { - content: "\e175" -} - -.glyphicon-floppy-open:before { - content: "\e176" -} - -.glyphicon-credit-card:before { - 
content: "\e177" -} - -.glyphicon-transfer:before { - content: "\e178" -} - -.glyphicon-cutlery:before { - content: "\e179" -} - -.glyphicon-header:before { - content: "\e180" -} - -.glyphicon-compressed:before { - content: "\e181" -} - -.glyphicon-earphone:before { - content: "\e182" -} - -.glyphicon-phone-alt:before { - content: "\e183" -} - -.glyphicon-tower:before { - content: "\e184" -} - -.glyphicon-stats:before { - content: "\e185" -} - -.glyphicon-sd-video:before { - content: "\e186" -} - -.glyphicon-hd-video:before { - content: "\e187" -} - -.glyphicon-subtitles:before { - content: "\e188" -} - -.glyphicon-sound-stereo:before { - content: "\e189" -} - -.glyphicon-sound-dolby:before { - content: "\e190" -} - -.glyphicon-sound-5-1:before { - content: "\e191" -} - -.glyphicon-sound-6-1:before { - content: "\e192" -} - -.glyphicon-sound-7-1:before { - content: "\e193" -} - -.glyphicon-copyright-mark:before { - content: "\e194" -} - -.glyphicon-registration-mark:before { - content: "\e195" -} - -.glyphicon-cloud-download:before { - content: "\e197" -} - -.glyphicon-cloud-upload:before { - content: "\e198" -} - -.glyphicon-tree-conifer:before { - content: "\e199" -} - -.glyphicon-tree-deciduous:before { - content: "\e200" -} - -.glyphicon-cd:before { - content: "\e201" -} - -.glyphicon-save-file:before { - content: "\e202" -} - -.glyphicon-open-file:before { - content: "\e203" -} - -.glyphicon-level-up:before { - content: "\e204" -} - -.glyphicon-copy:before { - content: "\e205" -} - -.glyphicon-paste:before { - content: "\e206" -} - -.glyphicon-alert:before { - content: "\e209" -} - -.glyphicon-equalizer:before { - content: "\e210" -} - -.glyphicon-king:before { - content: "\e211" -} - -.glyphicon-queen:before { - content: "\e212" -} - -.glyphicon-pawn:before { - content: "\e213" -} - -.glyphicon-bishop:before { - content: "\e214" -} - -.glyphicon-knight:before { - content: "\e215" -} - -.glyphicon-baby-formula:before { - content: "\e216" -} - -.glyphicon-tent:before { - content: "\26fa" -} - -.glyphicon-blackboard:before { - content: "\e218" -} - -.glyphicon-bed:before { - content: "\e219" -} - -.glyphicon-apple:before { - content: "\f8ff" -} - -.glyphicon-erase:before { - content: "\e221" -} - -.glyphicon-hourglass:before { - content: "\231b" -} - -.glyphicon-lamp:before { - content: "\e223" -} - -.glyphicon-duplicate:before { - content: "\e224" -} - -.glyphicon-piggy-bank:before { - content: "\e225" -} - -.glyphicon-scissors:before { - content: "\e226" -} - -.glyphicon-bitcoin:before { - content: "\e227" -} - -.glyphicon-btc:before { - content: "\e227" -} - -.glyphicon-xbt:before { - content: "\e227" -} - -.glyphicon-yen:before { - content: "\00a5" -} - -.glyphicon-jpy:before { - content: "\00a5" -} - -.glyphicon-ruble:before { - content: "\20bd" -} - -.glyphicon-rub:before { - content: "\20bd" -} - -.glyphicon-scale:before { - content: "\e230" -} - -.glyphicon-ice-lolly:before { - content: "\e231" -} - -.glyphicon-ice-lolly-tasted:before { - content: "\e232" -} - -.glyphicon-education:before { - content: "\e233" -} - -.glyphicon-option-horizontal:before { - content: "\e234" -} - -.glyphicon-option-vertical:before { - content: "\e235" -} - -.glyphicon-menu-hamburger:before { - content: "\e236" -} - -.glyphicon-modal-window:before { - content: "\e237" -} - -.glyphicon-oil:before { - content: "\e238" -} - -.glyphicon-grain:before { - content: "\e239" -} - -.glyphicon-sunglasses:before { - content: "\e240" -} - -.glyphicon-text-size:before { - content: "\e241" -} - 
-.glyphicon-text-color:before { - content: "\e242" -} - -.glyphicon-text-background:before { - content: "\e243" -} - -.glyphicon-object-align-top:before { - content: "\e244" -} - -.glyphicon-object-align-bottom:before { - content: "\e245" -} - -.glyphicon-object-align-horizontal:before { - content: "\e246" -} - -.glyphicon-object-align-left:before { - content: "\e247" -} - -.glyphicon-object-align-vertical:before { - content: "\e248" -} - -.glyphicon-object-align-right:before { - content: "\e249" -} - -.glyphicon-triangle-right:before { - content: "\e250" -} - -.glyphicon-triangle-left:before { - content: "\e251" -} - -.glyphicon-triangle-bottom:before { - content: "\e252" -} - -.glyphicon-triangle-top:before { - content: "\e253" -} - -.glyphicon-console:before { - content: "\e254" -} - -.glyphicon-superscript:before { - content: "\e255" -} - -.glyphicon-subscript:before { - content: "\e256" -} - -.glyphicon-menu-left:before { - content: "\e257" -} - -.glyphicon-menu-right:before { - content: "\e258" -} - -.glyphicon-menu-down:before { - content: "\e259" -} - -.glyphicon-menu-up:before { - content: "\e260" -} - -.form-group { - margin-bottom: 1rem -} - -.input-daterange .input-group-addon.input-group-prepend.input-group-append { - padding: inherit; - line-height: inherit; - text-shadow: inherit; - border-width: 0 -} - -.input-daterange .input-group-addon.input-group-prepend.input-group-append .input-group-text { - border-radius: 0 -} - -pre.shiny-code { - padding: 0.5rem -} - -.section.level1, -.section.level2, -.section.level3, -section.level1, -section.level2, -section.level3 { - margin-top: 1.5rem -} - -.section.level4, -.section.level5, -.section.level6, -section.level4, -section.level5, -section.level6 { - margin-top: 1rem -} - -.accordion .accordion-icon:not(:empty) { - margin-right: 0.25rem; - display: flex -} - -.accordion .accordion-button:not(.collapsed) { - box-shadow: none -} - -.accordion .accordion-button:not(.collapsed):focus { - box-shadow: var(--bs-accordion-btn-focus-box-shadow) -} - -.bslib-card .card-body+.card-body { - padding-top: 0 -} - -.bslib-card .card-body { - overflow: auto -} - -.bslib-card .card-body p { - margin-top: 0 -} - -.bslib-card .card-body p:last-child { - margin-bottom: 0 -} - -.bslib-card .card-body { - max-height: var(--bslib-card-body-max-height, none) -} - -.bslib-card.bslib-full-screen>.card-body { - max-height: var(--bslib-card-body-max-height-full-screen, none) -} - -.bslib-card .card-header .form-group { - margin-bottom: 0 -} - -.bslib-card .card-header .selectize-control { - margin-bottom: 0 -} - -.bslib-card .card-header .selectize-control .item { - margin-right: 1.15rem -} - -.bslib-card .card-footer { - margin-top: auto -} - -.bslib-card .bslib-navs-card-title { - display: flex; - flex-wrap: wrap; - justify-content: space-between; - align-items: center -} - -.bslib-card .bslib-navs-card-title .nav { - margin-left: auto -} - -.bslib-card .bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]) { - border: none -} - -.bslib-card .bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]) { - border-top-left-radius: 0; - border-top-right-radius: 0 -} - -.bslib-full-screen { - position: fixed; - inset: 3.5rem 1rem 1rem; - height: auto !important; - max-height: none !important; - width: auto !important; - z-index: 1070 -} - -.bslib-full-screen-enter { - display: none; - position: absolute; - bottom: 1px; - right: 3px; - margin: 0.5rem; - padding: 0.55rem !important; - font-size: .8rem; - cursor: pointer; - opacity: .6; - color: 
rgba(var(--bs-body-bg-rgb), 1); - z-index: 1070 -} - -.bslib-full-screen-enter:hover { - opacity: 1 -} - -.card:hover:not(.bslib-full-screen) .bslib-full-screen-enter, -.well:hover:not(.bslib-full-screen) .bslib-full-screen-enter { - display: block -} - -@media (max-width: 575.98px) { - .bslib-full-screen-enter { - display: none !important - } -} - -.bslib-full-screen-exit { - position: relative; - top: 1.35rem; - font-size: 0.9rem; - cursor: pointer; - text-decoration: none; - display: flex; - float: right; - margin-right: 2.15rem; - align-items: center; - color: rgba(var(--bs-body-bg-rgb), 0.8) -} - -.bslib-full-screen-exit:hover { - color: rgba(var(--bs-body-bg-rgb), 1) -} - -.bslib-full-screen-exit svg { - margin-left: 0.5rem; - font-size: 1.5rem -} - -#bslib-full-screen-overlay { - position: fixed; - inset: 0; - background-color: rgba(var(--bs-body-color-rgb), 0.6); - z-index: 1069 -} - -.tab-content>.tab-pane.html-fill-container { - display: none -} - -.tab-content>.active.html-fill-container { - display: flex -} - -.tab-content.html-fill-container { - padding: 0 -} - -.bslib-page-fill { - width: 100%; - height: 100%; - margin: 0; - padding: 1rem; - gap: 1rem -} - -@media (max-width: 575.98px) { - .bslib-page-fill { - height: var(--bslib-page-fill-mobile-height, auto) - } -} - -.bslib-column-wrap { - display: grid !important; - gap: 1rem; - height: var(--bslib-column-wrap-height) -} - -.bslib-column-wrap .card, -.bslib-column-wrap .well { - margin-bottom: 0 -} - -@media (max-width: 575.98px) { - .bslib-column-wrap { - grid-template-columns: 1fr !important; - height: var(--bslib-column-wrap-height-mobile) - } -} - -.bslib-sidebar-layout { - --bslib-sidebar-transition: grid-template-columns ease-in-out 0.5s; - --bslib-sidebar-border: var(--bs-card-border-width, 1px) solid var(--bs-card-border-color, rgba(223, 215, 202, 0.75)); - --bslib-sidebar-border-radius: var(--bs-border-radius); - --bslib-sidebar-vert-border: var(--bs-card-border-width, 1px) solid var(--bs-card-border-color, rgba(223, 215, 202, 0.75)); - --bslib-collapse-toggle-border: var(--bs-card-border-width, 1px) solid var(--bs-card-border-color, rgba(223, 215, 202, 0.75)); - --bslib-collapse-toggle-transform: 90deg; - --bslib-collapse-toggle-right-transform: -90deg; - display: grid !important; - grid-template-columns: Min(calc(100% - 1rem), var(--bslib-sidebar-width, 250px)) minmax(0, 1fr); - position: relative; - border: var(--bslib-sidebar-border); - border-radius: var(--bslib-sidebar-border-radius) -} - -.bslib-sidebar-layout[data-bslib-sidebar-border="false"] { - border: none -} - -.bslib-sidebar-layout[data-bslib-sidebar-border-radius="false"] { - border-radius: initial -} - -.bslib-sidebar-layout>.main, -.bslib-sidebar-layout>.sidebar { - grid-row: 1 / 2; - border-radius: inherit; - overflow: auto -} - -.bslib-sidebar-layout>.main { - grid-column: 2 / 3; - border-top-left-radius: 0; - border-bottom-left-radius: 0; - padding: 1.5rem -} - -.bslib-sidebar-layout>.sidebar { - grid-column: 1 / 2; - width: 100%; - height: 100%; - border-right: var(--bslib-sidebar-vert-border); - border-top-right-radius: 0; - border-bottom-right-radius: 0; - background-color: #f8f9fa; - color: #000 -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content { - display: flex; - flex-direction: column; - padding: 1.5rem -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>:last-child:not(.sidebar-title) { - margin-bottom: 0 -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion { - margin-left: -1.5rem; - margin-right: -1.5rem -} - 
-.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:first-child { - margin-top: -1.5rem -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:last-child { - margin-bottom: -1.5rem -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:not(:last-child) { - margin-bottom: 1rem -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion .accordion-body { - display: flex; - flex-direction: column -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:not(:first-child) .accordion-item:first-child { - border-top: var(--bs-accordion-border-width) solid var(--bs-accordion-border-color) -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.accordion:not(:last-child) .accordion-item:last-child { - border-bottom: var(--bs-accordion-border-width) solid var(--bs-accordion-border-color) -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.sidebar-title+.accordion { - margin-top: calc(-1rem - var(--bs-card-border-width, 1px)) -} - -.bslib-sidebar-layout>.sidebar>.sidebar-content>.sidebar-title:has(+.accordion) { - border-bottom: none -} - -.bslib-sidebar-layout>.sidebar .shiny-input-container { - width: 100% -} - -.bslib-sidebar-layout>.collapse-toggle { - grid-row: 1 / 2; - grid-column: 1 / 2; - display: inline-flex; - align-items: center; - position: absolute; - right: -1rem; - bottom: calc(1.5rem + var(--bslib-sidebar-overlap-counter, 0) * calc(1rem + 1.5rem)); - border: var(--bslib-collapse-toggle-border); - border-left: none; - border-radius: 0 var(--bs-border-radius) var(--bs-border-radius) 0; - padding: 7px 0; - background-color: #f8f9fa; - color: #000 -} - -.bslib-sidebar-layout>.collapse-toggle>.collapse-icon { - opacity: 0.8; - width: 1rem; - height: 1rem; - transform: rotate(var(--bslib-collapse-toggle-transform)); - transition: transform ease-in-out 0.35s -} - -.bslib-sidebar-layout>.collapse-toggle:hover>.collapse-icon { - opacity: 1 -} - -.bslib-sidebar-layout .sidebar-title { - font-size: 1.25rem; - line-height: 1.25; - margin-top: 0; - margin-bottom: 1rem; - padding-bottom: 1rem; - border-bottom: var(--bslib-sidebar-border) -} - -.bslib-sidebar-layout.sidebar-right { - grid-template-columns: minmax(0, 1fr) Min(calc(100% - 1rem), var(--bslib-sidebar-width, 250px)) -} - -.bslib-sidebar-layout.sidebar-right>.main { - grid-column: 1 / 2; - border-top-right-radius: 0; - border-bottom-right-radius: 0; - border-top-left-radius: inherit; - border-bottom-left-radius: inherit -} - -.bslib-sidebar-layout.sidebar-right>.sidebar { - grid-column: 2 / 3; - border-right: none; - border-left: var(--bslib-sidebar-vert-border); - border-top-left-radius: 0; - border-bottom-left-radius: 0 -} - -.bslib-sidebar-layout.sidebar-right>.collapse-toggle { - grid-column: 2 / 3; - left: -1rem; - right: unset; - border-radius: var(--bs-border-radius) 0 0 var(--bs-border-radius); - border-right: none; - border-left: var(--bslib-collapse-toggle-border) -} - -.bslib-sidebar-layout.sidebar-right>.collapse-toggle>.collapse-icon { - transform: rotate(var(--bslib-collapse-toggle-right-transform)) -} - -.bslib-sidebar-layout.sidebar-collapsed { - --bslib-collapse-toggle-transform: -90deg; - --bslib-collapse-toggle-right-transform: 90deg; - --bslib-sidebar-vert-border: none; - grid-template-columns: 0 minmax(0, 1fr) -} - -.bslib-sidebar-layout.sidebar-collapsed.sidebar-right { - grid-template-columns: minmax(0, 1fr) 0 -} - -.bslib-sidebar-layout.sidebar-collapsed:not(.transitioning)>.sidebar>* { - display: none -} - -.bslib-sidebar-layout.sidebar-collapsed>.main { - border-radius: 
inherit -} - -.bslib-sidebar-layout.sidebar-collapsed>.collapse-toggle { - right: calc(-1rem - var(--bs-card-border-width, 1px)) -} - -.bslib-sidebar-layout.sidebar-collapsed.sidebar-right>.collapse-toggle { - left: calc(-1rem - var(--bs-card-border-width, 1px)); - right: unset -} - -@media (min-width: 576px) { - .bslib-sidebar-layout.transitioning>.sidebar>.sidebar-content { - display: none - } -} - -@media (max-width: 575.98px) { - - .bslib-sidebar-layout, - .bslib-sidebar-layout.sidebar-right { - --bslib-sidebar-vert-border: none; - --bslib-sidebar-horiz-border: var(--bs-card-border-width, 1px) solid var(--bs-card-border-color, rgba(223, 215, 202, 0.75)); - --bslib-collapse-toggle-transform: -180deg; - --bslib-collapse-toggle-right-transform: -180deg; - grid-template-columns: 1fr !important; - grid-template-rows: fit-content(var(--bslib-sidebar-max-height-mobile, auto)) minmax(0, 1fr) - } - - .bslib-sidebar-layout[data-sidebar-init-auto-collapse], - .bslib-sidebar-layout.sidebar-right[data-sidebar-init-auto-collapse] { - --bslib-sidebar-js-init-collapsed: true - } - - .bslib-sidebar-layout>.sidebar, - .bslib-sidebar-layout.sidebar-right>.sidebar { - grid-row: 1 / 2; - grid-column: 1 / 2; - width: 100%; - border: none; - border-bottom: var(--bslib-sidebar-horiz-border); - border-radius: 0 - } - - .bslib-sidebar-layout>.main, - .bslib-sidebar-layout.sidebar-right>.main { - grid-row: 2 / 3; - grid-column: 1 / 2; - border-top-left-radius: 0; - border-top-right-radius: 0; - border-bottom-right-radius: inherit; - border-bottom-left-radius: inherit - } - - .bslib-sidebar-layout>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right>.collapse-toggle { - grid-row: 2 / 3; - grid-column: 1 / 2; - border-top: none !important; - border: var(--bslib-collapse-toggle-border); - border-radius: 0 0 var(--bs-border-radius) var(--bs-border-radius); - padding: 0 4px - } - - .bslib-sidebar-layout>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-right>.collapse-toggle { - top: calc(-1 * var(--bs-card-border-width, 1px)) - } - - .bslib-sidebar-layout.sidebar-collapsed>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-right.sidebar-collapsed>.collapse-toggle { - top: 0 - } - - .bslib-sidebar-layout>.collapse-toggle, - .bslib-sidebar-layout.sidebar-collapsed>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-right>.collapse-toggle, - .bslib-sidebar-layout.sidebar-right.sidebar-right.sidebar-collapsed>.collapse-toggle { - right: calc(1.5rem + var(--bslib-sidebar-counter, 0) * calc(1rem + 1.5rem)); - bottom: initial; - left: initial - } - - .bslib-sidebar-layout.sidebar-collapsed, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed { - --bslib-collapse-toggle-transform: 0deg; - --bslib-collapse-toggle-right-transform: 0deg; - grid-template-rows: 0 minmax(0, 1fr) - } - - .bslib-sidebar-layout.sidebar-collapsed>.main, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed>.main { - border-top-left-radius: inherit; - border-top-right-radius: inherit - } - - 
.bslib-sidebar-layout.sidebar-collapsed>.sidebar, - .bslib-sidebar-layout.sidebar-right.sidebar-collapsed>.sidebar { - border-bottom: none - } -} - -.navbar+.container-fluid:has(>.tab-content>.tab-pane.active>.bslib-sidebar-layout), -.navbar+.container-sm:has(>.tab-content>.tab-pane.active>.bslib-sidebar-layout), -.navbar+.container-md:has(>.tab-content>.tab-pane.active>.bslib-sidebar-layout), -.navbar+.container-lg:has(>.tab-content>.tab-pane.active>.bslib-sidebar-layout), -.navbar+.container-xl:has(>.tab-content>.tab-pane.active>.bslib-sidebar-layout), -.navbar+.container-xxl:has(>.tab-content>.tab-pane.active>.bslib-sidebar-layout) { - padding-left: 0; - padding-right: 0 -} - -.navbar+.container-fluid>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]), -.navbar+.container-sm>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]), -.navbar+.container-md>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]), -.navbar+.container-lg>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]), -.navbar+.container-xl>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]), -.navbar+.container-xxl>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border="true"]) { - border: none -} - -.navbar+.container-fluid>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]), -.navbar+.container-sm>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]), -.navbar+.container-md>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]), -.navbar+.container-lg>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]), -.navbar+.container-xl>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]), -.navbar+.container-xxl>.tab-content>.tab-pane.active>.bslib-sidebar-layout:not([data-bslib-sidebar-border-radius="true"]) { - border-radius: 0 -} - -.bslib-value-box .value-box-grid { - grid-template-columns: var(--bslib-value-box-widths) -} - -.bslib-value-box .value-box-showcase { - align-items: center; - justify-content: center; - margin-top: auto; - margin-bottom: auto; - padding: 1rem; - max-height: var(--bslib-value-box-max-height) -} - -.bslib-value-box .value-box-showcase .bi, -.bslib-value-box .value-box-showcase .fa { - opacity: .85 -} - -.bslib-value-box .value-box-showcase .bi { - font-size: 5rem -} - -.bslib-value-box .value-box-showcase .fa { - font-size: 4rem -} - -.bslib-value-box .value-box-showcase.showcase-top-right { - align-items: end; - padding-left: 0; - padding-bottom: 0 -} - -.bslib-value-box .value-box-area { - justify-content: center; - padding: 1.5rem 1rem; - font-size: .9rem; - font-weight: 500 -} - -.bslib-value-box .value-box-area * { - color: inherit; - margin-bottom: 0; - margin-top: 0 -} - -.bslib-value-box .value-box-area.border-start { - border-color: rgba(223, 215, 202, 0.3) !important -} - -.bslib-value-box.bslib-full-screen .value-box-grid { - grid-template-columns: var(--bslib-value-box-widths-full-screen) -} - -.bslib-value-box.bslib-full-screen .value-box-showcase { - max-height: var(--bslib-value-box-max-height-full-screen) -} - -.bslib-value-box:not(.bslib-full-screen) .value-box-showcase.showcase-top-right { - margin-top: 0 -} - -@media (max-width: 
575.98px) { - .bslib-value-box .value-box-grid { - grid-template-columns: var(--bslib-value-box-widths) !important - } -} - -@media (min-width: 576px) { - .nav:not(.nav-hidden) { - display: flex !important; - display: -webkit-flex !important - } - - .nav:not(.nav-hidden):not(.nav-stacked):not(.flex-column) { - float: none !important - } - - .nav:not(.nav-hidden):not(.nav-stacked):not(.flex-column)>.bslib-nav-spacer { - margin-left: auto !important - } - - .nav:not(.nav-hidden):not(.nav-stacked):not(.flex-column)>.form-inline { - margin-top: auto; - margin-bottom: auto - } - - .nav:not(.nav-hidden).nav-stacked { - flex-direction: column; - -webkit-flex-direction: column; - height: 100% - } - - .nav:not(.nav-hidden).nav-stacked>.bslib-nav-spacer { - margin-top: auto !important - } -} - -:root { - color-scheme: light -} - -.sandstone, -.tooltip, -.dropdown-menu .dropdown-item, -.dropdown-menu>li>a, -.pagination, -.breadcrumb, -.nav-pills .nav-link, -.nav-pills ul.nav.navbar-nav>li>a, -.nav-pills .nav-tabs>li>a, -.nav-pills>li>a, -.nav-tabs .nav-link, -.nav-tabs ul.nav.navbar-nav>li>a, -.nav-tabs>li>a, -.nav-tabs .nav-pills>li>a, -.btn, -.navbar .nav-link, -.navbar ul.nav.navbar-nav>li>a, -.navbar .nav-tabs>li>a, -.navbar .nav-pills>li>a { - font-size: 13px; - font-weight: 500; - line-height: 22px; - text-transform: uppercase -} - -.navbar-form input, -.navbar-form .form-control { - border: none -} - -.btn:hover { - border-color: transparent -} - -.btn-success, -.btn-warning { - color: #fff -} - -.table .thead-dark th { - background-color: #3e3f3a -} - -.nav-tabs .nav-link, -.nav-tabs ul.nav.navbar-nav>li>a, -.nav-tabs>li>a, -.nav-tabs .nav-pills>li>a { - background-color: #f8f5f0; - border-color: #dfd7ca -} - -.nav-tabs .nav-link, -.nav-tabs ul.nav.navbar-nav>li>a, -.nav-tabs>li>a, -.nav-tabs .nav-pills>li>a, -.nav-tabs .nav-link:hover, -.nav-tabs .nav-link:focus { - color: #8e8c84 -} - -.nav-tabs .nav-link.disabled, -.nav-tabs ul.nav.navbar-nav>li>a.disabled, -.nav-tabs>li>a.disabled, -.nav-tabs .nav-pills>li>a.disabled, -.nav-tabs .nav-link.disabled:hover, -.nav-tabs .nav-link.disabled:focus { - color: #dfd7ca; - background-color: #f8f5f0; - border-color: #dfd7ca -} - -.nav-pills .nav-link, -.nav-pills ul.nav.navbar-nav>li>a, -.nav-pills .nav-tabs>li>a, -.nav-pills>li>a { - color: #8e8c84; - border: 1px solid transparent -} - -.nav-pills .nav-link.active, -.nav-pills ul.nav.navbar-nav>li>a.active, -.nav-pills .nav-tabs>li>a.active, -.nav-pills>li>a.active, -.nav-pills .nav-link:hover, -.nav-pills ul.nav.navbar-nav>li>a:hover, -.nav-pills .nav-tabs>li>a:hover, -.nav-pills>li>a:hover, -.nav-pills .nav-link:focus, -.nav-pills ul.nav.navbar-nav>li>a:focus, -.nav-pills .nav-tabs>li>a:focus, -.nav-pills>li>a:focus { - background-color: #f8f5f0; - border-color: #dfd7ca -} - -.nav-pills .nav-link.disabled, -.nav-pills ul.nav.navbar-nav>li>a.disabled, -.nav-pills .nav-tabs>li>a.disabled, -.nav-pills>li>a.disabled, -.nav-pills .nav-link.disabled:hover { - color: #dfd7ca; - background-color: transparent; - border-color: transparent -} - -.breadcrumb { - border: 1px solid #dfd7ca -} - -.pagination a:hover { - text-decoration: none -} - -.alert { - color: #fff -} - -.alert a, -.alert .alert-link { - color: #fff; - text-decoration: underline -} - -.alert-primary, -.alert-primary>th, -.alert-primary>td { - background-color: #325d88 -} - -.alert-secondary, -.alert-secondary>th, -.alert-secondary>td { - background-color: #8e8c84 -} - -.alert-success, -.alert-success>th, -.alert-success>td { - 
background-color: #93c54b -} - -.alert-info, -.alert-info>th, -.alert-info>td { - background-color: #29abe0 -} - -.alert-danger, -.alert-danger>th, -.alert-danger>td { - background-color: #d9534f -} - -.alert-warning, -.alert-warning>th, -.alert-warning>td { - background-color: #f47c3c -} - -.alert-dark, -.alert-dark>th, -.alert-dark>td { - background-color: #3e3f3a -} - -.alert-light, -.alert-light>th, -.alert-light>td { - background-color: #f8f5f0 -} - -.alert-light, -.alert-light a:not(.btn), -.alert-light .alert-link { - color: #3e3f3a -} - -.badge.bg-light { - color: #3e3f3a -} - -.modal .btn-close, -.toast .btn-close, -.offcanvas .btn-close { - background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23dfd7ca'%3e%3cpath d='M.293.293a1 1 0 0 1 1.414 0L8 6.586 14.293.293a1 1 0 1 1 1.414 1.414L9.414 8l6.293 6.293a1 1 0 0 1-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 0 1-1.414-1.414L6.586 8 .293 1.707a1 1 0 0 1 0-1.414z'/%3e%3c/svg%3e") -} diff --git a/spaces/gwang-kim/DATID-3D/eg3d/camera_utils.py b/spaces/gwang-kim/DATID-3D/eg3d/camera_utils.py deleted file mode 100644 index 4d4be88a575b4f43cce42f71222215e9b912d9f9..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/camera_utils.py +++ /dev/null @@ -1,149 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -""" -Helper functions for constructing camera parameter matrices. Primarily used in visualization and inference scripts. -""" - -import math - -import torch -import torch.nn as nn - -from training.volumetric_rendering import math_utils - -class GaussianCameraPoseSampler: - """ - Samples pitch and yaw from a Gaussian distribution and returns a camera pose. - Camera is specified as looking at the origin. - If horizontal and vertical stddev (specified in radians) are zero, gives a - deterministic camera pose with yaw=horizontal_mean, pitch=vertical_mean. - The coordinate system is specified with y-up, z-forward, x-left. - Horizontal mean is the azimuthal angle (rotation around y axis) in radians, - vertical mean is the polar angle (angle from the y axis) in radians. - A point along the z-axis has azimuthal_angle=0, polar_angle=pi/2. 
- - Example: - For a camera pose looking at the origin with the camera at position [0, 0, 1]: - cam2world = GaussianCameraPoseSampler.sample(math.pi/2, math.pi/2, radius=1) - """ - - @staticmethod - def sample(horizontal_mean, vertical_mean, horizontal_stddev=0, vertical_stddev=0, radius=1, batch_size=1, device='cpu'): - h = torch.randn((batch_size, 1), device=device) * horizontal_stddev + horizontal_mean - v = torch.randn((batch_size, 1), device=device) * vertical_stddev + vertical_mean - v = torch.clamp(v, 1e-5, math.pi - 1e-5) - - theta = h - v = v / math.pi - phi = torch.arccos(1 - 2*v) - - camera_origins = torch.zeros((batch_size, 3), device=device) - - camera_origins[:, 0:1] = radius*torch.sin(phi) * torch.cos(math.pi-theta) - camera_origins[:, 2:3] = radius*torch.sin(phi) * torch.sin(math.pi-theta) - camera_origins[:, 1:2] = radius*torch.cos(phi) - - forward_vectors = math_utils.normalize_vecs(-camera_origins) - return create_cam2world_matrix(forward_vectors, camera_origins) - - -class LookAtPoseSampler: - """ - Same as GaussianCameraPoseSampler, except the - camera is specified as looking at 'lookat_position', a 3-vector. - - Example: - For a camera pose looking at the origin with the camera at position [0, 0, 1]: - cam2world = LookAtPoseSampler.sample(math.pi/2, math.pi/2, torch.tensor([0, 0, 0]), radius=1) - """ - - @staticmethod - def sample(horizontal_mean, vertical_mean, lookat_position, horizontal_stddev=0, vertical_stddev=0, radius=1, batch_size=1, device='cpu'): - h = torch.randn((batch_size, 1), device=device) * horizontal_stddev + horizontal_mean - v = torch.randn((batch_size, 1), device=device) * vertical_stddev + vertical_mean - v = torch.clamp(v, 1e-5, math.pi - 1e-5) - - theta = h - v = v / math.pi - phi = torch.arccos(1 - 2*v) - - camera_origins = torch.zeros((batch_size, 3), device=device) - - camera_origins[:, 0:1] = radius*torch.sin(phi) * torch.cos(math.pi-theta) - camera_origins[:, 2:3] = radius*torch.sin(phi) * torch.sin(math.pi-theta) - camera_origins[:, 1:2] = radius*torch.cos(phi) - - # forward_vectors = math_utils.normalize_vecs(-camera_origins) - forward_vectors = math_utils.normalize_vecs(lookat_position - camera_origins) - return create_cam2world_matrix(forward_vectors, camera_origins) - -class UniformCameraPoseSampler: - """ - Same as GaussianCameraPoseSampler, except the - pose is sampled from a uniform distribution with range +-[horizontal/vertical]_stddev. 
- - Example: - For a batch of random camera poses looking at the origin with yaw sampled from [-pi/2, +pi/2] radians: - - cam2worlds = UniformCameraPoseSampler.sample(math.pi/2, math.pi/2, horizontal_stddev=math.pi/2, radius=1, batch_size=16) - """ - - @staticmethod - def sample(horizontal_mean, vertical_mean, horizontal_stddev=0, vertical_stddev=0, radius=1, batch_size=1, device='cpu'): - h = (torch.rand((batch_size, 1), device=device) * 2 - 1) * horizontal_stddev + horizontal_mean - v = (torch.rand((batch_size, 1), device=device) * 2 - 1) * vertical_stddev + vertical_mean - v = torch.clamp(v, 1e-5, math.pi - 1e-5) - - theta = h - v = v / math.pi - phi = torch.arccos(1 - 2*v) - - camera_origins = torch.zeros((batch_size, 3), device=device) - - camera_origins[:, 0:1] = radius*torch.sin(phi) * torch.cos(math.pi-theta) - camera_origins[:, 2:3] = radius*torch.sin(phi) * torch.sin(math.pi-theta) - camera_origins[:, 1:2] = radius*torch.cos(phi) - - forward_vectors = math_utils.normalize_vecs(-camera_origins) - return create_cam2world_matrix(forward_vectors, camera_origins) - -def create_cam2world_matrix(forward_vector, origin): - """ - Takes in the direction the camera is pointing and the camera origin and returns a cam2world matrix. - Works on batches of forward_vectors, origins. Assumes y-axis is up and that there is no camera roll. - """ - - forward_vector = math_utils.normalize_vecs(forward_vector) - up_vector = torch.tensor([0, 1, 0], dtype=torch.float, device=origin.device).expand_as(forward_vector) - - right_vector = -math_utils.normalize_vecs(torch.cross(up_vector, forward_vector, dim=-1)) - up_vector = math_utils.normalize_vecs(torch.cross(forward_vector, right_vector, dim=-1)) - - rotation_matrix = torch.eye(4, device=origin.device).unsqueeze(0).repeat(forward_vector.shape[0], 1, 1) - rotation_matrix[:, :3, :3] = torch.stack((right_vector, up_vector, forward_vector), axis=-1) - - translation_matrix = torch.eye(4, device=origin.device).unsqueeze(0).repeat(forward_vector.shape[0], 1, 1) - translation_matrix[:, :3, 3] = origin - cam2world = (translation_matrix @ rotation_matrix)[:, :, :] - assert(cam2world.shape[1:] == (4, 4)) - return cam2world - - -def FOV_to_intrinsics(fov_degrees, device='cpu'): - """ - Creates a 3x3 camera intrinsics matrix from the camera field of view, specified in degrees. - Note the intrinsics are returned as normalized by image size, rather than in pixel units. - Assumes principal point is at image center. 
- """ - - focal_length = float(1 / (math.tan(fov_degrees * 3.14159 / 360) * 1.414)) - intrinsics = torch.tensor([[focal_length, 0, 0.5], [0, focal_length, 0.5], [0, 0, 1]], device=device) - return intrinsics \ No newline at end of file diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py b/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py deleted file mode 100644 index 8c4a1fba06bf6bc680aa59bf645f796283f6f1c6..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py +++ /dev/null @@ -1,605 +0,0 @@ -# python 3.7 -"""Utility functions for visualizing results on html page.""" - -import base64 -import os.path -import cv2 -import numpy as np - -__all__ = [ - 'get_grid_shape', 'get_blank_image', 'load_image', 'save_image', - 'resize_image', 'add_text_to_image', 'fuse_images', 'HtmlPageVisualizer', - 'VideoReader', 'VideoWriter', 'adjust_pixel_range' -] - - -def adjust_pixel_range(images, min_val=-1.0, max_val=1.0, channel_order='NCHW'): - """Adjusts the pixel range of the input images. - - This function assumes the input array (image batch) is with shape [batch_size, - channel, height, width] if `channel_order = NCHW`, or with shape [batch_size, - height, width] if `channel_order = NHWC`. The returned images are with shape - [batch_size, height, width, channel] and pixel range [0, 255]. - - NOTE: The channel order of output images will remain the same as the input. - - Args: - images: Input images to adjust pixel range. - min_val: Min value of the input images. (default: -1.0) - max_val: Max value of the input images. (default: 1.0) - channel_order: Channel order of the input array. (default: NCHW) - - Returns: - The postprocessed images with dtype `numpy.uint8` and range [0, 255]. - - Raises: - ValueError: If the input `images` are not with type `numpy.ndarray` or the - shape is invalid according to `channel_order`. - """ - if not isinstance(images, np.ndarray): - raise ValueError(f'Images should be with type `numpy.ndarray`!') - - channel_order = channel_order.upper() - if channel_order not in ['NCHW', 'NHWC']: - raise ValueError(f'Invalid channel order `{channel_order}`!') - - if images.ndim != 4: - raise ValueError(f'Input images are expected to be with shape `NCHW` or ' - f'`NHWC`, but `{images.shape}` is received!') - if channel_order == 'NCHW' and images.shape[1] not in [1, 3]: - raise ValueError(f'Input images should have 1 or 3 channels under `NCHW` ' - f'channel order!') - if channel_order == 'NHWC' and images.shape[3] not in [1, 3]: - raise ValueError(f'Input images should have 1 or 3 channels under `NHWC` ' - f'channel order!') - - images = images.astype(np.float32) - images = (images - min_val) * 255 / (max_val - min_val) - images = np.clip(images + 0.5, 0, 255).astype(np.uint8) - if channel_order == 'NCHW': - images = images.transpose(0, 2, 3, 1) - - return images - - -def get_grid_shape(size, row=0, col=0, is_portrait=False): - """Gets the shape of a grid based on the size. - - This function makes greatest effort on making the output grid square if - neither `row` nor `col` is set. If `is_portrait` is set as `False`, the height - will always be equal to or smaller than the width. For example, if input - `size = 16`, output shape will be `(4, 4)`; if input `size = 15`, output shape - will be (3, 5). Otherwise, the height will always be equal to or larger than - the width. 
- - Args: - size: Size (height * width) of the target grid. - is_portrait: Whether to return a portrait size of a landscape size. - (default: False) - - Returns: - A two-element tuple, representing height and width respectively. - """ - assert isinstance(size, int) - assert isinstance(row, int) - assert isinstance(col, int) - if size == 0: - return (0, 0) - - if row > 0 and col > 0 and row * col != size: - row = 0 - col = 0 - - if row > 0 and size % row == 0: - return (row, size // row) - if col > 0 and size % col == 0: - return (size // col, col) - - row = int(np.sqrt(size)) - while row > 0: - if size % row == 0: - col = size // row - break - row = row - 1 - - return (col, row) if is_portrait else (row, col) - - -def get_blank_image(height, width, channels=3, is_black=True): - """Gets a blank image, either white of black. - - NOTE: This function will always return an image with `RGB` channel order for - color image and pixel range [0, 255]. - - Args: - height: Height of the returned image. - width: Width of the returned image. - channels: Number of channels. (default: 3) - is_black: Whether to return a black image or white image. (default: True) - """ - shape = (height, width, channels) - if is_black: - return np.zeros(shape, dtype=np.uint8) - return np.ones(shape, dtype=np.uint8) * 255 - - -def load_image(path): - """Loads an image from disk. - - NOTE: This function will always return an image with `RGB` channel order for - color image and pixel range [0, 255]. - - Args: - path: Path to load the image from. - - Returns: - An image with dtype `np.ndarray` or `None` if input `path` does not exist. - """ - if not os.path.isfile(path): - return None - - image = cv2.imread(path) - return image[:, :, ::-1] - - -def save_image(path, image): - """Saves an image to disk. - - NOTE: The input image (if colorful) is assumed to be with `RGB` channel order - and pixel range [0, 255]. - - Args: - path: Path to save the image to. - image: Image to save. - """ - if image is None: - return - - assert len(image.shape) == 3 and image.shape[2] in [1, 3] - cv2.imwrite(path, image[:, :, ::-1]) - - -def resize_image(image, *args, **kwargs): - """Resizes image. - - This is a wrap of `cv2.resize()`. - - NOTE: THe channel order of the input image will not be changed. - - Args: - image: Image to resize. - """ - if image is None: - return None - - assert image.ndim == 3 and image.shape[2] in [1, 3] - image = cv2.resize(image, *args, **kwargs) - if image.ndim == 2: - return image[:, :, np.newaxis] - return image - - -def add_text_to_image(image, - text='', - position=None, - font=cv2.FONT_HERSHEY_TRIPLEX, - font_size=1.0, - line_type=cv2.LINE_8, - line_width=1, - color=(255, 255, 255)): - """Overlays text on given image. - - NOTE: The input image is assumed to be with `RGB` channel order. - - Args: - image: The image to overlay text on. - text: Text content to overlay on the image. (default: '') - position: Target position (bottom-left corner) to add text. If not set, - center of the image will be used by default. (default: None) - font: Font of the text added. (default: cv2.FONT_HERSHEY_TRIPLEX) - font_size: Font size of the text added. (default: 1.0) - line_type: Line type used to depict the text. (default: cv2.LINE_8) - line_width: Line width used to depict the text. (default: 1) - color: Color of the text added in `RGB` channel order. (default: - (255, 255, 255)) - - Returns: - An image with target text overlayed on. 
- """ - if image is None or not text: - return image - - cv2.putText(img=image, - text=text, - org=position, - fontFace=font, - fontScale=font_size, - color=color, - thickness=line_width, - lineType=line_type, - bottomLeftOrigin=False) - - return image - - -def fuse_images(images, - image_size=None, - row=0, - col=0, - is_row_major=True, - is_portrait=False, - row_spacing=0, - col_spacing=0, - border_left=0, - border_right=0, - border_top=0, - border_bottom=0, - black_background=True): - """Fuses a collection of images into an entire image. - - Args: - images: A collection of images to fuse. Should be with shape [num, height, - width, channels]. - image_size: Int or two-element tuple. This field is used to resize the image - before fusing. `None` disables resizing. (default: None) - row: Number of rows used for image fusion. If not set, this field will be - automatically assigned based on `col` and total number of images. - (default: None) - col: Number of columns used for image fusion. If not set, this field will be - automatically assigned based on `row` and total number of images. - (default: None) - is_row_major: Whether the input images should be arranged row-major or - column-major. (default: True) - is_portrait: Only active when both `row` and `col` should be assigned - automatically. (default: False) - row_spacing: Space between rows. (default: 0) - col_spacing: Space between columns. (default: 0) - border_left: Width of left border. (default: 0) - border_right: Width of right border. (default: 0) - border_top: Width of top border. (default: 0) - border_bottom: Width of bottom border. (default: 0) - - Returns: - The fused image. - - Raises: - ValueError: If the input `images` is not with shape [num, height, width, - width]. - """ - if images is None: - return images - - if not images.ndim == 4: - raise ValueError(f'Input `images` should be with shape [num, height, ' - f'width, channels], but {images.shape} is received!') - - num, image_height, image_width, channels = images.shape - if image_size is not None: - if isinstance(image_size, int): - image_size = (image_size, image_size) - assert isinstance(image_size, (list, tuple)) and len(image_size) == 2 - width, height = image_size - else: - height, width = image_height, image_width - row, col = get_grid_shape(num, row=row, col=col, is_portrait=is_portrait) - fused_height = ( - height * row + row_spacing * (row - 1) + border_top + border_bottom) - fused_width = ( - width * col + col_spacing * (col - 1) + border_left + border_right) - fused_image = get_blank_image( - fused_height, fused_width, channels=channels, is_black=black_background) - images = images.reshape(row, col, image_height, image_width, channels) - if not is_row_major: - images = images.transpose(1, 0, 2, 3, 4) - - for i in range(row): - y = border_top + i * (height + row_spacing) - for j in range(col): - x = border_left + j * (width + col_spacing) - if image_size is not None: - image = cv2.resize(images[i, j], image_size) - else: - image = images[i, j] - fused_image[y:y + height, x:x + width] = image - - return fused_image - - -def get_sortable_html_header(column_name_list, sort_by_ascending=False): - """Gets header for sortable html page. - - Basically, the html page contains a sortable table, where user can sort the - rows by a particular column by clicking the column head. - - Example: - - column_name_list = [name_1, name_2, name_3] - header = get_sortable_html_header(column_name_list) - footer = get_sortable_html_footer() - sortable_table = ... 
- html_page = header + sortable_table + footer - - Args: - column_name_list: List of column header names. - sort_by_ascending: Default sorting order. If set as `True`, the html page - will be sorted by ascending order when the header is clicked for the first - time. - - Returns: - A string, which represents for the header for a sortable html page. - """ - header = '\n'.join([ - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '']) - for idx, column_name in enumerate(column_name_list): - header += f' \n' - header += '\n' - header += '\n' - header += '\n' - - return header - - -def get_sortable_html_footer(): - """Gets footer for sortable html page. - - Check function `get_sortable_html_header()` for more details. - """ - return '\n
                                                                                                                                              {column_name}
                                                                                                                                              \n\n\n\n' - - -def encode_image_to_html_str(image, image_size=None): - """Encodes an image to html language. - - Args: - image: The input image to encode. Should be with `RGB` channel order. - image_size: Int or two-element tuple. This field is used to resize the image - before encoding. `None` disables resizing. (default: None) - - Returns: - A string which represents the encoded image. - """ - if image is None: - return '' - - assert len(image.shape) == 3 and image.shape[2] in [1, 3] - - # Change channel order to `BGR`, which is opencv-friendly. - image = image[:, :, ::-1] - - # Resize the image if needed. - if image_size is not None: - if isinstance(image_size, int): - image_size = (image_size, image_size) - assert isinstance(image_size, (list, tuple)) and len(image_size) == 2 - image = cv2.resize(image, image_size) - - # Encode the image to html-format string. - encoded_image = cv2.imencode(".jpg", image)[1].tostring() - encoded_image_base64 = base64.b64encode(encoded_image).decode('utf-8') - html_str = f'' - - return html_str - - -class HtmlPageVisualizer(object): - """Defines the html page visualizer. - - This class can be used to visualize image results as html page. Basically, it - is based on an html-format sorted table with helper functions - `get_sortable_html_header()`, `get_sortable_html_footer()`, and - `encode_image_to_html_str()`. To simplify the usage, specifying the following - fields is enough to create a visualization page: - - (1) num_rows: Number of rows of the table (header-row exclusive). - (2) num_cols: Number of columns of the table. - (3) header contents (optional): Title of each column. - - NOTE: `grid_size` can be used to assign `num_rows` and `num_cols` - automatically. - - Example: - - html = HtmlPageVisualizer(num_rows, num_cols) - html.set_headers([...]) - for i in range(num_rows): - for j in range(num_cols): - html.set_cell(i, j, text=..., image=...) - html.save('visualize.html') - """ - - def __init__(self, - num_rows=0, - num_cols=0, - grid_size=0, - is_portrait=False, - viz_size=None): - if grid_size > 0: - num_rows, num_cols = get_grid_shape( - grid_size, row=num_rows, col=num_cols, is_portrait=is_portrait) - assert num_rows > 0 and num_cols > 0 - - self.num_rows = num_rows - self.num_cols = num_cols - self.viz_size = viz_size - self.headers = ['' for _ in range(self.num_cols)] - self.cells = [[{ - 'text': '', - 'image': '', - } for _ in range(self.num_cols)] for _ in range(self.num_rows)] - - def set_header(self, column_idx, content): - """Sets the content of a particular header by column index.""" - self.headers[column_idx] = content - - def set_headers(self, contents): - """Sets the contents of all headers.""" - if isinstance(contents, str): - contents = [contents] - assert isinstance(contents, (list, tuple)) - assert len(contents) == self.num_cols - for column_idx, content in enumerate(contents): - self.set_header(column_idx, content) - - def set_cell(self, row_idx, column_idx, text='', image=None): - """Sets the content of a particular cell. - - Basically, a cell contains some text as well as an image. Both text and - image can be empty. - - Args: - row_idx: Row index of the cell to edit. - column_idx: Column index of the cell to edit. - text: Text to add into the target cell. - image: Image to show in the target cell. Should be with `RGB` channel - order. 
- """ - self.cells[row_idx][column_idx]['text'] = text - self.cells[row_idx][column_idx]['image'] = encode_image_to_html_str( - image, self.viz_size) - - def save(self, save_path): - """Saves the html page.""" - html = '' - for i in range(self.num_rows): - html += f'\n' - for j in range(self.num_cols): - text = self.cells[i][j]['text'] - image = self.cells[i][j]['image'] - if text: - html += f' {text}

                                                                                                                                              {image}\n' - else: - html += f' {image}\n' - html += f'\n' - - header = get_sortable_html_header(self.headers) - footer = get_sortable_html_footer() - - with open(save_path, 'w') as f: - f.write(header + html + footer) - - -class VideoReader(object): - """Defines the video reader. - - This class can be used to read frames from a given video. - """ - - def __init__(self, path): - """Initializes the video reader by loading the video from disk.""" - if not os.path.isfile(path): - raise ValueError(f'Video `{path}` does not exist!') - - self.path = path - self.video = cv2.VideoCapture(path) - assert self.video.isOpened() - self.position = 0 - - self.length = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT)) - self.frame_height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - self.frame_width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH)) - self.fps = self.video.get(cv2.CAP_PROP_FPS) - - def __del__(self): - """Releases the opened video.""" - self.video.release() - - def read(self, position=None): - """Reads a certain frame. - - NOTE: The returned frame is assumed to be with `RGB` channel order. - - Args: - position: Optional. If set, the reader will read frames from the exact - position. Otherwise, the reader will read next frames. (default: None) - """ - if position is not None and position < self.length: - self.video.set(cv2.CAP_PROP_POS_FRAMES, position) - self.position = position - - success, frame = self.video.read() - self.position = self.position + 1 - - return frame[:, :, ::-1] if success else None - - -class VideoWriter(object): - """Defines the video writer. - - This class can be used to create a video. - - NOTE: `.avi` and `DIVX` is the most recommended codec format since it does not - rely on other dependencies. - """ - - def __init__(self, path, frame_height, frame_width, fps=24, codec='DIVX'): - """Creates the video writer.""" - self.path = path - self.frame_height = frame_height - self.frame_width = frame_width - self.fps = fps - self.codec = codec - - self.video = cv2.VideoWriter(filename=path, - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=fps, - frameSize=(frame_width, frame_height)) - - def __del__(self): - """Releases the opened video.""" - self.video.release() - - def write(self, frame): - """Writes a target frame. - - NOTE: The input frame is assumed to be with `RGB` channel order. - """ - self.video.write(frame[:, :, ::-1]) diff --git a/spaces/hahahehe99340/chatgpt/assets/custom.js b/spaces/hahahehe99340/chatgpt/assets/custom.js deleted file mode 100644 index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000 --- a/spaces/hahahehe99340/chatgpt/assets/custom.js +++ /dev/null @@ -1 +0,0 @@ -// custom javascript here \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/checkpoint/catalog.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/checkpoint/catalog.py deleted file mode 100644 index 62f81f3c1531e2726400cba4c97b60d744670da5..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/checkpoint/catalog.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import logging -from fvcore.common.file_io import PathHandler, PathManager - - -class ModelCatalog(object): - """ - Store mappings from names to third-party models. - """ - - S3_C2_DETECTRON_PREFIX = "https://dl.fbaipublicfiles.com/detectron" - - # MSRA models have STRIDE_IN_1X1=True. False otherwise. - # NOTE: all BN models here have fused BN into an affine layer. - # As a result, you should only load them to a model with "FrozenBN". - # Loading them to a model with regular BN or SyncBN is wrong. - # Even when loaded to FrozenBN, it is still different from affine by an epsilon, - # which should be negligible for training. - # NOTE: all models here uses PIXEL_STD=[1,1,1] - # NOTE: Most of the BN models here are no longer used. We use the - # re-converted pre-trained models under detectron2 model zoo instead. - C2_IMAGENET_MODELS = { - "MSRA/R-50": "ImageNetPretrained/MSRA/R-50.pkl", - "MSRA/R-101": "ImageNetPretrained/MSRA/R-101.pkl", - "FAIR/R-50-GN": "ImageNetPretrained/47261647/R-50-GN.pkl", - "FAIR/R-101-GN": "ImageNetPretrained/47592356/R-101-GN.pkl", - "FAIR/X-101-32x8d": "ImageNetPretrained/20171220/X-101-32x8d.pkl", - "FAIR/X-101-64x4d": "ImageNetPretrained/FBResNeXt/X-101-64x4d.pkl", - "FAIR/X-152-32x8d-IN5k": "ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl", - } - - C2_DETECTRON_PATH_FORMAT = ( - "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl" # noqa B950 - ) - - C2_DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival" - C2_DATASET_COCO_KEYPOINTS = "keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival" - - # format: {model_name} -> part of the url - C2_DETECTRON_MODELS = { - "35857197/e2e_faster_rcnn_R-50-C4_1x": "35857197/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml.01_33_49.iAX0mXvW", # noqa B950 - "35857345/e2e_faster_rcnn_R-50-FPN_1x": "35857345/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml.01_36_30.cUF7QR7I", # noqa B950 - "35857890/e2e_faster_rcnn_R-101-FPN_1x": "35857890/12_2017_baselines/e2e_faster_rcnn_R-101-FPN_1x.yaml.01_38_50.sNxI7sX7", # noqa B950 - "36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x": "36761737/12_2017_baselines/e2e_faster_rcnn_X-101-32x8d-FPN_1x.yaml.06_31_39.5MIHi1fZ", # noqa B950 - "35858791/e2e_mask_rcnn_R-50-C4_1x": "35858791/12_2017_baselines/e2e_mask_rcnn_R-50-C4_1x.yaml.01_45_57.ZgkA7hPB", # noqa B950 - "35858933/e2e_mask_rcnn_R-50-FPN_1x": "35858933/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.01_48_14.DzEQe4wC", # noqa B950 - "35861795/e2e_mask_rcnn_R-101-FPN_1x": "35861795/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_1x.yaml.02_31_37.KqyEK4tT", # noqa B950 - "36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x": "36761843/12_2017_baselines/e2e_mask_rcnn_X-101-32x8d-FPN_1x.yaml.06_35_59.RZotkLKI", # noqa B950 - "48616381/e2e_mask_rcnn_R-50-FPN_2x_gn": "GN/48616381/04_2018_gn_baselines/e2e_mask_rcnn_R-50-FPN_2x_gn_0416.13_23_38.bTlTI97Q", # noqa B950 - "37697547/e2e_keypoint_rcnn_R-50-FPN_1x": "37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao", # noqa B950 - "35998355/rpn_R-50-C4_1x": "35998355/12_2017_baselines/rpn_R-50-C4_1x.yaml.08_00_43.njH5oD9L", # noqa B950 - "35998814/rpn_R-50-FPN_1x": "35998814/12_2017_baselines/rpn_R-50-FPN_1x.yaml.08_06_03.Axg0r179", # noqa B950 - "36225147/fast_R-50-FPN_1x": "36225147/12_2017_baselines/fast_rcnn_R-50-FPN_1x.yaml.08_39_09.L3obSdQ2", # noqa B950 - } - - @staticmethod - def get(name): - if name.startswith("Caffe2Detectron/COCO"): - return ModelCatalog._get_c2_detectron_baseline(name) - if 
name.startswith("ImageNetPretrained/"): - return ModelCatalog._get_c2_imagenet_pretrained(name) - raise RuntimeError("model not present in the catalog: {}".format(name)) - - @staticmethod - def _get_c2_imagenet_pretrained(name): - prefix = ModelCatalog.S3_C2_DETECTRON_PREFIX - name = name[len("ImageNetPretrained/") :] - name = ModelCatalog.C2_IMAGENET_MODELS[name] - url = "/".join([prefix, name]) - return url - - @staticmethod - def _get_c2_detectron_baseline(name): - name = name[len("Caffe2Detectron/COCO/") :] - url = ModelCatalog.C2_DETECTRON_MODELS[name] - if "keypoint_rcnn" in name: - dataset = ModelCatalog.C2_DATASET_COCO_KEYPOINTS - else: - dataset = ModelCatalog.C2_DATASET_COCO - - if "35998355/rpn_R-50-C4_1x" in name: - # this one model is somehow different from others .. - type = "rpn" - else: - type = "generalized_rcnn" - - # Detectron C2 models are stored in the structure defined in `C2_DETECTRON_PATH_FORMAT`. - url = ModelCatalog.C2_DETECTRON_PATH_FORMAT.format( - prefix=ModelCatalog.S3_C2_DETECTRON_PREFIX, url=url, type=type, dataset=dataset - ) - return url - - -class ModelCatalogHandler(PathHandler): - """ - Resolve URL like catalog://. - """ - - PREFIX = "catalog://" - - def _get_supported_prefixes(self): - return [self.PREFIX] - - def _get_local_path(self, path): - logger = logging.getLogger(__name__) - catalog_path = ModelCatalog.get(path[len(self.PREFIX) :]) - logger.info("Catalog entry {} points to {}".format(path, catalog_path)) - return PathManager.get_local_path(catalog_path) - - def _open(self, path, mode="r", **kwargs): - return PathManager.open(self._get_local_path(path), mode, **kwargs) - - -class Detectron2Handler(PathHandler): - """ - Resolve anything that's in Detectron2 model zoo. - """ - - PREFIX = "detectron2://" - S3_DETECTRON2_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/" - - def _get_supported_prefixes(self): - return [self.PREFIX] - - def _get_local_path(self, path): - name = path[len(self.PREFIX) :] - return PathManager.get_local_path(self.S3_DETECTRON2_PREFIX + name) - - def _open(self, path, mode="r", **kwargs): - return PathManager.open(self._get_local_path(path), mode, **kwargs) - - -PathManager.register_handler(ModelCatalogHandler()) -PathManager.register_handler(Detectron2Handler()) diff --git a/spaces/heiyubili/bingo/README.md b/spaces/heiyubili/bingo/README.md deleted file mode 100644 index d65eafbc8431818f738e8e086455fa6159f101bb..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/README.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
- -# Bingo - -Bingo, a New Bing that lets you breathe easy. - -A faithful recreation of the main features of the New Bing web UI, accessible from mainland China, compatible with most Microsoft Bing AI features, and self-hostable. - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Github issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
- -## Demo site - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## Features - -- Completely rewritten on Next.js, closely recreating the New Bing web UI; the experience is essentially the same as Bing AI. -- Supports Docker builds for quick, easy deployment and access. -- The Cookie can be configured globally and shared by all users. -- Supports continuous voice conversations - -## RoadMap - - - [x] wss forwarding - - [x] One-click deployment - - [x] Improved mobile layout - - [x] Image generation - - [x] Voice input (voice commands supported; currently desktop Edge and Chrome only) - - [x] Voice output (must be enabled manually) - - [x] Image input - - [x] Custom domains - - [ ] Chat history - - [ ] Dark mode - - [ ] Built-in prompts - - [ ] Offline access - - [ ] Internationalization - -## One-click deployment -You can also deploy your own New Bing AI to 🤗 HuggingFace with one click. - -### Deploy to Huggingface -1. Click this badge -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic); the default configuration can be left unchanged. - -2. Once deployment finishes, open "Settings" > "Space domain", copy the HF domain, and share it with others. - -> Huggingface does not support binding your own domain, but there are two workarounds: -> 1. Use Cloudflare Workers: [custom domain with Cloudflare Workers](#custom-domain-with-cloudflare-workers) -> 2. Use Github Pages plus an iframe: [how to bind a domain](https://github.com/weaigc/bingo/issues/4) - -### Custom domain with Cloudflare Workers - -> Core code: [worker.js](./cloudflare/worker.js) - -- [Register a Cloudflare account](https://dash.cloudflare.com/sign-up) - -- Add a new site; you need your own domain with its `Name Server` delegated to Cloudflare (search the web for details). - -- Open "Workers" from the left-hand menu and click "Create a Worker". - -- Create the Worker service, copy the full contents of [worker.js](./cloudflare/worker.js) into it, adjust it according to the comments, then save and deploy. - -- Set your custom access domain under Triggers. - -### Deploying to other platforms -
- -Other platforms are currently being blocked by New Bing and run into many problems, so they are no longer recommended; the instructions below are kept for reference only. - - -#### Deploy to Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### Deploy to Vercel -If you are a paying Vercel user, you can deploy to Vercel with one click using the link below. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended. - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### Deploy to Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
- -## Requirements - -- Node.js >= 18 -- Bing AI [identity information](#how-to-get-bing_header) - -## Installation and usage - -> Since Microsoft is currently blocking quite aggressively, deploying to [Huggingface](#deploy-to-huggingface) is the recommended option. - -* Run with Node - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # pnpm i is recommended -npm run build -npm run start -``` - -* Run with Docker -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# or -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## How to get BING_HEADER -> Setting BING_HEADER means sharing your own account with everyone who uses this service; if you do not need login-free image generation, setting this variable is not recommended. - -Open https://www.bing.com and sign in, then visit https://www.bing.com/turing/captcha/challenge and pass the human verification, then: - -![BING HEADER](./docs/images/curl.png) - -> The copied content should look like the example below. After confirming the format is correct, open https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 , paste it in, click "Convert to BING_HEADER and copy", then paste from the clipboard to get the value. (You can also verify it on that page first.) - -The following is a format reference. Note that what the web page saves starts with `curl`, while the server-side `BING_HEADER` is in `base64`; the two formats are not interchangeable. -
                                                                                                                                              -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
                                                                                                                                              - -
                                                                                                                                              -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VB
YTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl
5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
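If you would rather not paste the header into the hosted converter page, a rough local equivalent is sketched below. This is a minimal, unofficial sketch that assumes the `curl` command copied from DevTools has been saved to a local file named `header.txt` (the file name and script are illustrative, not part of the project); the official converter may normalize whitespace slightly differently, so verify the result on the settings page before relying on it.

```python
# Minimal sketch (assumption: the copied `curl ...` command is saved to header.txt).
# It collapses the multi-line command into a single line and base64-encodes it,
# approximating the BING_HEADER format shown above.
import base64

with open("header.txt", "r", encoding="utf-8") as f:
    curl_command = f.read()

# Join the continuation lines into one line, as in the base64 example above.
one_line = " ".join(curl_command.splitlines())

bing_header = base64.b64encode(one_line.encode("utf-8")).decode("ascii")
print(bing_header)
```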
- - -## Acknowledgements - - Thanks to [EdgeGPT](https://github.com/acheong08/EdgeGPT) for the proxy API approach. - - Thanks to [Vercel AI](https://github.com/vercel-labs/ai-chatbot) for the base scaffolding, and to [ChatHub](https://github.com/chathub-dev/chathub) and [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) for parts of the code. - - -## Questions and discussion - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/network_architecture/generic_UNet_MTLearly_boundary.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/network_architecture/generic_UNet_MTLearly_boundary.py deleted file mode 100644 index f500ebd79b54a494dcf065b4f682833ba1037bc3..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/network_architecture/generic_UNet_MTLearly_boundary.py +++ /dev/null @@ -1,530 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from copy import deepcopy -from nnunet.utilities.nd_softmax import softmax_helper -from torch import nn -import torch -import numpy as np -from nnunet.network_architecture.initialization import InitWeights_He -from nnunet.network_architecture.neural_network import SegmentationNetwork -import torch.nn.functional -import matplotlib -import matplotlib.pyplot as plt - - -class ConvDropoutNormNonlin(nn.Module): - """ - fixes a bug in ConvDropoutNormNonlin where lrelu was used regardless of nonlin. Bad.
- """ - - def __init__(self, input_channels, output_channels, - conv_op=nn.Conv2d, conv_kwargs=None, - norm_op=nn.BatchNorm2d, norm_op_kwargs=None, - dropout_op=nn.Dropout2d, dropout_op_kwargs=None, - nonlin=nn.LeakyReLU, nonlin_kwargs=None): - super(ConvDropoutNormNonlin, self).__init__() - if nonlin_kwargs is None: - nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True} - if dropout_op_kwargs is None: - dropout_op_kwargs = {'p': 0.5, 'inplace': True} - if norm_op_kwargs is None: - norm_op_kwargs = {'eps': 1e-5, 'affine': True, 'momentum': 0.1} - if conv_kwargs is None: - conv_kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1, 'dilation': 1, 'bias': True} - - self.nonlin_kwargs = nonlin_kwargs - self.nonlin = nonlin - self.dropout_op = dropout_op - self.dropout_op_kwargs = dropout_op_kwargs - self.norm_op_kwargs = norm_op_kwargs - self.conv_kwargs = conv_kwargs - self.conv_op = conv_op - self.norm_op = norm_op - - self.conv = self.conv_op(input_channels, output_channels, **self.conv_kwargs) - if self.dropout_op is not None and self.dropout_op_kwargs['p'] is not None and self.dropout_op_kwargs[ - 'p'] > 0: - self.dropout = self.dropout_op(**self.dropout_op_kwargs) - else: - self.dropout = None - self.instnorm = self.norm_op(output_channels, **self.norm_op_kwargs) - self.lrelu = self.nonlin(**self.nonlin_kwargs) - - def forward(self, x): - x = self.conv(x) - if self.dropout is not None: - x = self.dropout(x) - return self.lrelu(self.instnorm(x)) - - -class ConvDropoutNonlinNorm(ConvDropoutNormNonlin): - def forward(self, x): - x = self.conv(x) - if self.dropout is not None: - x = self.dropout(x) - return self.instnorm(self.lrelu(x)) - - -class StackedConvLayers(nn.Module): - def __init__(self, input_feature_channels, output_feature_channels, num_convs, - conv_op=nn.Conv2d, conv_kwargs=None, - norm_op=nn.BatchNorm2d, norm_op_kwargs=None, - dropout_op=nn.Dropout2d, dropout_op_kwargs=None, - nonlin=nn.LeakyReLU, nonlin_kwargs=None, first_stride=None, basic_block=ConvDropoutNormNonlin): - ''' - stacks ConvDropoutNormLReLU layers. initial_stride will only be applied to first layer in the stack. 
The other - parameters affect all layers - :param input_feature_channels: - :param output_feature_channels: - :param num_convs: - :param dilation: - :param kernel_size: - :param padding: - :param dropout: - :param initial_stride: - :param conv_op: - :param norm_op: - :param dropout_op: - :param inplace: - :param neg_slope: - :param norm_affine: - :param conv_bias: - ''' - self.input_channels = input_feature_channels - self.output_channels = output_feature_channels - - if nonlin_kwargs is None: - nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True} - if dropout_op_kwargs is None: - dropout_op_kwargs = {'p': 0.5, 'inplace': True} - if norm_op_kwargs is None: - norm_op_kwargs = {'eps': 1e-5, 'affine': True, 'momentum': 0.1} - if conv_kwargs is None: - conv_kwargs = {'kernel_size': 3, 'stride': 1, 'padding': 1, 'dilation': 1, 'bias': True} - - self.nonlin_kwargs = nonlin_kwargs - self.nonlin = nonlin - self.dropout_op = dropout_op - self.dropout_op_kwargs = dropout_op_kwargs - self.norm_op_kwargs = norm_op_kwargs - self.conv_kwargs = conv_kwargs - self.conv_op = conv_op - self.norm_op = norm_op - - if first_stride is not None: - self.conv_kwargs_first_conv = deepcopy(conv_kwargs) - self.conv_kwargs_first_conv['stride'] = first_stride - else: - self.conv_kwargs_first_conv = conv_kwargs - - super(StackedConvLayers, self).__init__() - self.blocks = nn.Sequential( - *([basic_block(input_feature_channels, output_feature_channels, self.conv_op, - self.conv_kwargs_first_conv, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, - self.nonlin, self.nonlin_kwargs)] + - [basic_block(output_feature_channels, output_feature_channels, self.conv_op, - self.conv_kwargs, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, - self.nonlin, self.nonlin_kwargs) for _ in range(num_convs - 1)])) - - def forward(self, x): - return self.blocks(x) - - -def print_module_training_status(module): - if isinstance(module, nn.Conv2d) or isinstance(module, nn.Conv3d) or isinstance(module, nn.Dropout3d) or \ - isinstance(module, nn.Dropout2d) or isinstance(module, nn.Dropout) or isinstance(module, nn.InstanceNorm3d) \ - or isinstance(module, nn.InstanceNorm2d) or isinstance(module, nn.InstanceNorm1d) \ - or isinstance(module, nn.BatchNorm2d) or isinstance(module, nn.BatchNorm3d) or isinstance(module, - nn.BatchNorm1d): - print(str(module), module.training) - - -class Upsample(nn.Module): - def __init__(self, size=None, scale_factor=None, mode='nearest', align_corners=False): - super(Upsample, self).__init__() - self.align_corners = align_corners - self.mode = mode - self.scale_factor = scale_factor - self.size = size - - def forward(self, x): - return nn.functional.interpolate(x, size=self.size, scale_factor=self.scale_factor, mode=self.mode, - align_corners=self.align_corners) - - -class Generic_UNet_MTLearly_boundary(SegmentationNetwork): - DEFAULT_BATCH_SIZE_3D = 2 - DEFAULT_PATCH_SIZE_3D = (64, 192, 160) - SPACING_FACTOR_BETWEEN_STAGES = 2 - BASE_NUM_FEATURES_3D = 30 - MAX_NUMPOOL_3D = 999 - MAX_NUM_FILTERS_3D = 320 - - DEFAULT_PATCH_SIZE_2D = (256, 256) - BASE_NUM_FEATURES_2D = 30 - DEFAULT_BATCH_SIZE_2D = 50 - MAX_NUMPOOL_2D = 999 - MAX_FILTERS_2D = 480 - - use_this_for_batch_size_computation_2D = 19739648 - use_this_for_batch_size_computation_3D = 520000000 # 505789440 - - def __init__(self, input_channels, base_num_features, num_classes, num_pool, num_conv_per_stage=2, - feat_map_mul_on_downscale=2, conv_op=nn.Conv2d, - norm_op=nn.BatchNorm2d, norm_op_kwargs=None, 
- dropout_op=nn.Dropout2d, dropout_op_kwargs=None, - nonlin=nn.LeakyReLU, nonlin_kwargs=None, deep_supervision=True, dropout_in_localization=False, - final_nonlin=softmax_helper, weightInitializer=InitWeights_He(1e-2), pool_op_kernel_sizes=None, - conv_kernel_sizes=None, - upscale_logits=False, convolutional_pooling=False, convolutional_upsampling=False, - max_num_features=None, basic_block=ConvDropoutNormNonlin, - seg_output_use_bias=False): - """ - basically more flexible than v1, architecture is the same - - Does this look complicated? Nah bro. Functionality > usability - - This does everything you need, including world peace. - - Questions? -> f.isensee@dkfz.de - """ - super(Generic_UNet_MTLearly_boundary, self).__init__() - self.convolutional_upsampling = convolutional_upsampling - self.convolutional_pooling = convolutional_pooling - self.upscale_logits = upscale_logits - if nonlin_kwargs is None: - nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True} - if dropout_op_kwargs is None: - dropout_op_kwargs = {'p': 0.5, 'inplace': True} - if norm_op_kwargs is None: - norm_op_kwargs = {'eps': 1e-5, 'affine': True, 'momentum': 0.1} - - self.conv_kwargs = {'stride': 1, 'dilation': 1, 'bias': True} - - self.nonlin = nonlin - self.nonlin_kwargs = nonlin_kwargs - self.dropout_op_kwargs = dropout_op_kwargs - self.norm_op_kwargs = norm_op_kwargs - self.weightInitializer = weightInitializer - self.conv_op = conv_op - self.norm_op = norm_op - self.dropout_op = dropout_op - self.num_classes = num_classes - self.final_nonlin = final_nonlin - self._deep_supervision = deep_supervision - self.do_ds = deep_supervision - - if conv_op == nn.Conv2d: - upsample_mode = 'bilinear' - pool_op = nn.MaxPool2d - transpconv = nn.ConvTranspose2d - if pool_op_kernel_sizes is None: - pool_op_kernel_sizes = [(2, 2)] * num_pool - if conv_kernel_sizes is None: - conv_kernel_sizes = [(3, 3)] * (num_pool + 1) - elif conv_op == nn.Conv3d: - upsample_mode = 'trilinear' - pool_op = nn.MaxPool3d - transpconv = nn.ConvTranspose3d - if pool_op_kernel_sizes is None: - pool_op_kernel_sizes = [(2, 2, 2)] * num_pool - if conv_kernel_sizes is None: - conv_kernel_sizes = [(3, 3, 3)] * (num_pool + 1) - else: - raise ValueError("unknown convolution dimensionality, conv op: %s" % str(conv_op)) - - self.input_shape_must_be_divisible_by = np.prod(pool_op_kernel_sizes, 0, dtype=np.int64) - self.pool_op_kernel_sizes = pool_op_kernel_sizes - self.conv_kernel_sizes = conv_kernel_sizes - - self.conv_pad_sizes = [] - for krnl in self.conv_kernel_sizes: - self.conv_pad_sizes.append([1 if i == 3 else 0 for i in krnl]) - - if max_num_features is None: - if self.conv_op == nn.Conv3d: - self.max_num_features = self.MAX_NUM_FILTERS_3D - else: - self.max_num_features = self.MAX_FILTERS_2D - else: - self.max_num_features = max_num_features - - self.conv_blocks_context = [] - self.conv_blocks_localization_1 = [] - self.conv_blocks_localization_2 = [] - self.conv_blocks_localization_3 = [] - self.td = [] - self.tu_1 = [] - self.tu_2 = [] - self.tu_3 = [] - self.seg_outputs_1 = [] - self.seg_outputs_2 = [] - self.seg_outputs_3 = [] - - - output_features = base_num_features - input_features = input_channels - - for d in range(num_pool): - # determine the first stride - if d != 0 and self.convolutional_pooling: - first_stride = pool_op_kernel_sizes[d - 1] - else: - first_stride = None - - self.conv_kwargs['kernel_size'] = self.conv_kernel_sizes[d] - self.conv_kwargs['padding'] = self.conv_pad_sizes[d] - # add convolutions - 
self.conv_blocks_context.append(StackedConvLayers(input_features, output_features, num_conv_per_stage, - self.conv_op, self.conv_kwargs, self.norm_op, - self.norm_op_kwargs, self.dropout_op, - self.dropout_op_kwargs, self.nonlin, self.nonlin_kwargs, - first_stride, basic_block=basic_block)) - if not self.convolutional_pooling: - self.td.append(pool_op(pool_op_kernel_sizes[d])) - input_features = output_features - output_features = int(np.round(output_features * feat_map_mul_on_downscale)) - - output_features = min(output_features, self.max_num_features) - - # now the bottleneck. - # determine the first stride - if self.convolutional_pooling: - first_stride = pool_op_kernel_sizes[-1] - else: - first_stride = None - - # the output of the last conv must match the number of features from the skip connection if we are not using - # convolutional upsampling. If we use convolutional upsampling then the reduction in feature maps will be - # done by the transposed conv - if self.convolutional_upsampling: - final_num_features = output_features - else: - final_num_features = self.conv_blocks_context[-1].output_channels - - self.conv_kwargs['kernel_size'] = self.conv_kernel_sizes[num_pool] - self.conv_kwargs['padding'] = self.conv_pad_sizes[num_pool] - self.conv_blocks_context.append(nn.Sequential( - StackedConvLayers(input_features, output_features, num_conv_per_stage - 1, self.conv_op, self.conv_kwargs, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, self.nonlin, - self.nonlin_kwargs, first_stride, basic_block=basic_block), - StackedConvLayers(output_features, final_num_features, 1, self.conv_op, self.conv_kwargs, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, self.nonlin, - self.nonlin_kwargs, basic_block=basic_block))) - - # if we don't want to do dropout in the localization pathway then we set the dropout prob to zero here - if not dropout_in_localization: - old_dropout_p = self.dropout_op_kwargs['p'] - self.dropout_op_kwargs['p'] = 0.0 - - # now lets build the localization pathway - for u in range(num_pool): - nfeatures_from_down = final_num_features - nfeatures_from_skip = self.conv_blocks_context[ - -(2 + u)].output_channels # self.conv_blocks_context[-1] is bottleneck, so start with -2 - n_features_after_tu_and_concat = nfeatures_from_skip * 2 - - # the first conv reduces the number of features to match those of skip - # the following convs work on that number of features - # if not convolutional upsampling then the final conv reduces the num of features again - if u != num_pool - 1 and not self.convolutional_upsampling: - final_num_features = self.conv_blocks_context[-(3 + u)].output_channels - else: - final_num_features = nfeatures_from_skip - - if not self.convolutional_upsampling: - self.tu_1.append(Upsample(scale_factor=pool_op_kernel_sizes[-(u + 1)], mode=upsample_mode)) - self.tu_2.append(Upsample(scale_factor=pool_op_kernel_sizes[-(u + 1)], mode=upsample_mode)) - self.tu_3.append(Upsample(scale_factor=pool_op_kernel_sizes[-(u + 1)], mode=upsample_mode)) - else: - self.tu_1.append(transpconv(nfeatures_from_down, nfeatures_from_skip, pool_op_kernel_sizes[-(u + 1)], - pool_op_kernel_sizes[-(u + 1)], bias=False)) - self.tu_2.append(transpconv(nfeatures_from_down, nfeatures_from_skip, pool_op_kernel_sizes[-(u + 1)], - pool_op_kernel_sizes[-(u + 1)], bias=False)) - self.tu_3.append(transpconv(nfeatures_from_down, nfeatures_from_skip, pool_op_kernel_sizes[-(u + 1)], - pool_op_kernel_sizes[-(u + 1)], bias=False)) - - 
self.conv_kwargs['kernel_size'] = self.conv_kernel_sizes[- (u + 1)] - self.conv_kwargs['padding'] = self.conv_pad_sizes[- (u + 1)] - self.conv_blocks_localization_1.append(nn.Sequential( - StackedConvLayers(n_features_after_tu_and_concat, nfeatures_from_skip, num_conv_per_stage - 1, - self.conv_op, self.conv_kwargs, self.norm_op, self.norm_op_kwargs, self.dropout_op, - self.dropout_op_kwargs, self.nonlin, self.nonlin_kwargs, basic_block=basic_block), - StackedConvLayers(nfeatures_from_skip, final_num_features, 1, self.conv_op, self.conv_kwargs, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, - self.nonlin, self.nonlin_kwargs, basic_block=basic_block) - )) - - self.conv_blocks_localization_2.append(nn.Sequential( - StackedConvLayers(n_features_after_tu_and_concat, nfeatures_from_skip, num_conv_per_stage - 1, - self.conv_op, self.conv_kwargs, self.norm_op, self.norm_op_kwargs, self.dropout_op, - self.dropout_op_kwargs, self.nonlin, self.nonlin_kwargs, basic_block=basic_block), - StackedConvLayers(nfeatures_from_skip, final_num_features, 1, self.conv_op, self.conv_kwargs, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, - self.nonlin, self.nonlin_kwargs, basic_block=basic_block) - )) - self.conv_blocks_localization_3.append(nn.Sequential( - StackedConvLayers(n_features_after_tu_and_concat, nfeatures_from_skip, num_conv_per_stage - 1, - self.conv_op, self.conv_kwargs, self.norm_op, self.norm_op_kwargs, self.dropout_op, - self.dropout_op_kwargs, self.nonlin, self.nonlin_kwargs, basic_block=basic_block), - StackedConvLayers(nfeatures_from_skip, final_num_features, 1, self.conv_op, self.conv_kwargs, - self.norm_op, self.norm_op_kwargs, self.dropout_op, self.dropout_op_kwargs, - self.nonlin, self.nonlin_kwargs, basic_block=basic_block) - )) - - for ds in range(len(self.conv_blocks_localization_1)): - self.seg_outputs_1.append(conv_op(self.conv_blocks_localization_1[ds][-1].output_channels, num_classes[0], - 1, 1, 0, 1, 1, seg_output_use_bias)) - - for ds in range(len(self.conv_blocks_localization_2)): - self.seg_outputs_2.append(conv_op(self.conv_blocks_localization_2[ds][-1].output_channels, num_classes[1], - 1, 1, 0, 1, 1, seg_output_use_bias)) - - for ds in range(len(self.conv_blocks_localization_3)): - self.seg_outputs_3.append(conv_op(self.conv_blocks_localization_3[ds][-1].output_channels, num_classes[2], - 1, 1, 0, 1, 1, seg_output_use_bias)) - - self.upscale_logits_ops = [] - cum_upsample = np.cumprod(np.vstack(pool_op_kernel_sizes), axis=0)[::-1] - for usl in range(num_pool - 1): - if self.upscale_logits: - self.upscale_logits_ops.append(Upsample(scale_factor=tuple([int(i) for i in cum_upsample[usl + 1]]), - mode=upsample_mode)) - else: - self.upscale_logits_ops.append(lambda x: x) - - if not dropout_in_localization: - self.dropout_op_kwargs['p'] = old_dropout_p - - # register all modules properly - self.conv_blocks_context = nn.ModuleList(self.conv_blocks_context) - - self.conv_blocks_localization_1 = nn.ModuleList(self.conv_blocks_localization_1) - self.conv_blocks_localization_2 = nn.ModuleList(self.conv_blocks_localization_2) - self.conv_blocks_localization_3 = nn.ModuleList(self.conv_blocks_localization_3) - - self.td = nn.ModuleList(self.td) - self.tu_1 = nn.ModuleList(self.tu_1) - self.tu_2 = nn.ModuleList(self.tu_2) - self.tu_3 = nn.ModuleList(self.tu_3) - - self.seg_outputs_1 = nn.ModuleList(self.seg_outputs_1) - self.seg_outputs_2 = nn.ModuleList(self.seg_outputs_2) - self.seg_outputs_3 =
nn.ModuleList(self.seg_outputs_3) - - if self.upscale_logits: - self.upscale_logits_ops = nn.ModuleList( - self.upscale_logits_ops) # lambda x:x is not a Module so we need to distinguish here - - if self.weightInitializer is not None: - self.apply(self.weightInitializer) - # self.apply(print_module_training_status) - - - - def forward(self, x): - skips = [] - seg_outputs_1 = [] - seg_outputs_2 = [] - seg_outputs_3 = [] - - for d in range(len(self.conv_blocks_context) - 1): - x = self.conv_blocks_context[d](x) - skips.append(x) - if not self.convolutional_pooling: - x = self.td[d](x) - - x1 = self.conv_blocks_context[-1](x) - x2 = x1.clone() - x3 = x1.clone() - - # Decoder 1 - for u in range(len(self.tu_1)): - x1 = self.tu_1[u](x1) - x1 = torch.cat((x1, skips[-(u + 1)]), dim=1) - x1 = self.conv_blocks_localization_1[u](x1) - seg_outputs_1.append(self.final_nonlin(self.seg_outputs_1[u](x1))) - - # Decoder2 - for u in range(len(self.tu_2)): - x2 = self.tu_2[u](x2) - x2 = torch.cat((x2, skips[-(u + 1)]), dim=1) - x2 = self.conv_blocks_localization_2[u](x2) - seg_outputs_2.append(self.final_nonlin(self.seg_outputs_2[u](x2))) - - # Decoder3 - for u in range(len(self.tu_3)): - x3 = self.tu_3[u](x3) - x3 = torch.cat((x3, skips[-(u + 1)]), dim=1) - x3 = self.conv_blocks_localization_3[u](x3) - seg_outputs_3.append(self.final_nonlin(self.seg_outputs_3[u](x3))) - - if self._deep_supervision and self.do_ds: - seg_1 = tuple([seg_outputs_1[-1]] + [i(j) for i, j in - zip(list(self.upscale_logits_ops)[::-1], seg_outputs_1[:-1][::-1])]) - seg_2 = tuple([seg_outputs_2[-1]] + [i(j) for i, j in - zip(list(self.upscale_logits_ops)[::-1], seg_outputs_2[:-1][::-1])]) - seg_3 = tuple([seg_outputs_3[-1]] + [i(j) for i, j in - zip(list(self.upscale_logits_ops)[::-1], seg_outputs_3[:-1][::-1])]) - seg = tuple([torch.cat([s1, s2, s3], dim=1) for s1, s2, s3 in zip(seg_1, seg_2, seg_3)]) - return seg - else: - seg = tuple([torch.cat([s1, s2, s3], dim=1) for s1, s2, s3 in zip(seg_outputs_1, seg_outputs_2, seg_outputs_3)]) - return seg[-1] - - @staticmethod - def compute_approx_vram_consumption(patch_size, num_pool_per_axis, base_num_features, max_num_features, - num_modalities, num_classes, pool_op_kernel_sizes, deep_supervision=False, - conv_per_stage=2): - """ - This only applies for num_conv_per_stage and convolutional_upsampling=True - not real vram consumption. 
just a constant term to which the vram consumption will be approx proportional - (+ offset for parameter storage) - :param deep_supervision: - :param patch_size: - :param num_pool_per_axis: - :param base_num_features: - :param max_num_features: - :param num_modalities: - :param num_classes: - :param pool_op_kernel_sizes: - :return: - """ - if not isinstance(num_pool_per_axis, np.ndarray): - num_pool_per_axis = np.array(num_pool_per_axis) - - npool = len(pool_op_kernel_sizes) - - map_size = np.array(patch_size) - tmp = np.int64((conv_per_stage * 2 + 1) * np.prod(map_size, dtype=np.int64) * base_num_features + - num_modalities * np.prod(map_size, dtype=np.int64) + - num_classes * np.prod(map_size, dtype=np.int64)) - - num_feat = base_num_features - - for p in range(npool): - for pi in range(len(num_pool_per_axis)): - map_size[pi] /= pool_op_kernel_sizes[p][pi] - num_feat = min(num_feat * 2, max_num_features) - # num_blocks = (conv_per_stage * 2 + 1) if p < (npool - 1) else conv_per_stage # conv_per_stage + conv_per_stage for the convs of encode/decode and 1 for transposed conv - num_blocks = (conv_per_stage * 5 + 1) if p < (npool - 1) else conv_per_stage # conv_per_stage + conv_per_stage for the convs of encode/decode*2 and 1 for transposed conv - tmp += num_blocks * np.prod(map_size, dtype=np.int64) * num_feat - if deep_supervision and p < (npool - 2): - tmp += np.prod(map_size, dtype=np.int64) * num_classes - # print(p, map_size, num_feat, tmp) - return tmp diff --git a/spaces/housexu123/bingo-2.0/src/components/chat-image.tsx b/spaces/housexu123/bingo-2.0/src/components/chat-image.tsx deleted file mode 100644 index 05ecc9771eada27a0f2d160bb01cba170d37bb09..0000000000000000000000000000000000000000 --- a/spaces/housexu123/bingo-2.0/src/components/chat-image.tsx +++ /dev/null @@ -1,170 +0,0 @@ -import { - useEffect, - useState, - useCallback, - ChangeEvent, - ClipboardEvent, - MouseEventHandler, - FormEvent, - useRef -} from "react" -import Image from 'next/image' -import PasteIcon from '@/assets/images/paste.svg' -import UploadIcon from '@/assets/images/upload.svg' -import CameraIcon from '@/assets/images/camera.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { cn } from '@/lib/utils' - -interface ChatImageProps extends Pick, 'uploadImage'> {} - -const preventDefault: MouseEventHandler = (event) => { - event.nativeEvent.stopImmediatePropagation() -} - -const toBase64 = (file: File): Promise => new Promise((resolve, reject) => { - const reader = new FileReader() - reader.readAsDataURL(file) - reader.onload = () => resolve(reader.result as string) - reader.onerror = reject -}) - -export function ChatImage({ children, uploadImage }: React.PropsWithChildren) { - const videoRef = useRef(null) - const canvasRef = useRef(null) - const mediaStream = useRef() - const [panel, setPanel] = useState('none') - - const upload = useCallback((url: string) => { - if (url) { - uploadImage(url) - } - setPanel('none') - }, [panel]) - - const onUpload = useCallback(async (event: ChangeEvent) => { - const file = event.target.files?.[0] - if (file) { - const fileDataUrl = await toBase64(file) - if (fileDataUrl) { - upload(fileDataUrl) - } - } - }, []) - - const onPaste = useCallback((event: ClipboardEvent) => { - const pasteUrl = event.clipboardData.getData('text') ?? 
'' - upload(pasteUrl) - }, []) - - const onEnter = useCallback((event: FormEvent) => { - event.preventDefault() - event.stopPropagation() - // @ts-ignore - const inputUrl = event.target.elements.image.value - if (inputUrl) { - upload(inputUrl) - } - }, []) - - const openVideo: MouseEventHandler = async (event) => { - event.stopPropagation() - setPanel('camera-mode') - } - - const onCapture = () => { - if (canvasRef.current && videoRef.current) { - const canvas = canvasRef.current - canvas.width = videoRef.current!.videoWidth - canvas.height = videoRef.current!.videoHeight - canvas.getContext('2d')?.drawImage(videoRef.current, 0, 0, canvas.width, canvas.height) - const cameraUrl = canvas.toDataURL('image/jpeg') - upload(cameraUrl) - } - } - - useEffect(() => { - const handleBlur = () => { - if (panel !== 'none') { - setPanel('none') - } - } - document.addEventListener('click', handleBlur) - return () => { - document.removeEventListener('click', handleBlur) - } - }, [panel]) - - useEffect(() => { - if (panel === 'camera-mode') { - navigator.mediaDevices.getUserMedia({ video: true, audio: false }) - .then(videoStream => { - mediaStream.current = videoStream - if (videoRef.current) { - videoRef.current.srcObject = videoStream - } - }) - } else { - if (mediaStream.current) { - mediaStream.current.getTracks().forEach(function(track) { - track.stop() - }) - mediaStream.current = undefined - } - } - }, [panel]) - - return ( -
                                                                                                                                              -
                                                                                                                                              panel === 'none' ? setPanel('normal') : setPanel('none')}>{children}
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -

                                                                                                                                              添加图像

                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              - paste - - e.stopPropagation()} - /> - -
                                                                                                                                              -
                                                                                                                                              - - -
                                                                                                                                              -
                                                                                                                                              - {panel === 'camera-mode' &&
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              } -
                                                                                                                                              -
                                                                                                                                              - ) -} diff --git a/spaces/housexu123/bingo-2.0/src/components/chat-suggestions.tsx b/spaces/housexu123/bingo-2.0/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/housexu123/bingo-2.0/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
                                                                                                                                              -
                                                                                                                                              - - { - currentSuggestions.map(suggestion => ( - - )) - } -
                                                                                                                                              -
                                                                                                                                              - ) : null -} diff --git a/spaces/huaiji3y/bingo-Public/src/components/chat-header.tsx b/spaces/huaiji3y/bingo-Public/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
                                                                                                                                              - logo -
                                                                                                                                              欢迎使用新必应
                                                                                                                                              -
                                                                                                                                              由 AI 支持的网页版 Copilot
                                                                                                                                              -
                                                                                                                                              - ) -} diff --git a/spaces/huggingface-projects/llama-2-13b-chat/USE_POLICY.md b/spaces/huggingface-projects/llama-2-13b-chat/USE_POLICY.md deleted file mode 100644 index abbcc199b2d1e4feb5d7e40c0bd67e1b0ce29e97..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/llama-2-13b-chat/USE_POLICY.md +++ /dev/null @@ -1,50 +0,0 @@ -# Llama 2 Acceptable Use Policy - -Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). - -## Prohibited Uses -We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: - -1. Violate the law or others’ rights, including to: - 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: - 1. Violence or terrorism - 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material - 3. Human trafficking, exploitation, and sexual violence - 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. - 5. Sexual solicitation - 6. Any other criminal activity - 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals - 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services - 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices - 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws - 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials - 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system - - - -2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: - 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State - 2. Guns and illegal weapons (including weapon development) - 3. Illegal drugs and regulated/controlled substances - 4. 
Operation of critical infrastructure, transportation technologies, or heavy machinery - 5. Self-harm or harm to others, including suicide, cutting, and eating disorders - 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual - - - -3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: - 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation - 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content - 3. Generating, promoting, or further distributing spam - 4. Impersonating another individual without consent, authorization, or legal right - 5. Representing that the use of Llama 2 or outputs are human-generated - 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement -4. Fail to appropriately disclose to end users any known dangers of your AI system - -Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: - -* Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) -* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) -* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) -* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com) - diff --git a/spaces/humblepenguin/mental-health-chatbot/app.py b/spaces/humblepenguin/mental-health-chatbot/app.py deleted file mode 100644 index 8b4e33945e86853588476ac23348facaa6732431..0000000000000000000000000000000000000000 --- a/spaces/humblepenguin/mental-health-chatbot/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration - -#import model class and tokenizer -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration - -#download and setup the model and tokenizer -model_name = 'facebook/blenderbot-400M-distill' -tokenizer = BlenderbotTokenizer.from_pretrained(model_name) -model = BlenderbotForConditionalGeneration.from_pretrained(model_name) - -def func (message): - inputs = tokenizer(message, return_tensors="pt") - result = model.generate(**inputs) - return tokenizer.decode(result[0]).replace('', '').replace('', '') - -import gradio as gr -app = gr.Interface(fn=func, inputs="textbox", outputs="textbox", title="Mental Health Chatbot", css="footer {visibility: hidden}") -app.launch() diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_conflict_r50.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_conflict_r50.py deleted file mode 100644 index de94fcb32cad796bda63521e4f81a4f7fe88923b..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf12m_conflict_r50.py +++ /dev/null @@ -1,28 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r50" -config.resume = False 
-config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.interclass_filtering_threshold = 0 -config.fp16 = True -config.weight_decay = 5e-4 -config.batch_size = 128 -config.optimizer = "sgd" -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace12M_Conflict" -config.num_classes = 1017970 -config.num_image = 12720066 -config.num_epoch = 20 -config.warmup_epoch = config.num_epoch // 10 -config.val_targets = [] diff --git a/spaces/hzy123/bingo/src/components/tone-selector.tsx b/spaces/hzy123/bingo/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
                                                                                                                                              -
                                                                                                                                              - 选择对话样式 -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                                - { - ToneList.map(tone => ( -
                                                                                                                                              • onChange?.(tone.type)}> - -
                                                                                                                                              • - )) - } -
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              - ) -} diff --git a/spaces/hzy123/bingo/src/components/ui/sheet.tsx b/spaces/hzy123/bingo/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
                                                                                                                                              -) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
                                                                                                                                              -) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Arma 3 Tools !!EXCLUSIVE!! Crack Download Pc Kickass.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Arma 3 Tools !!EXCLUSIVE!! Crack Download Pc Kickass.md deleted file mode 100644 index e0f24703dc5e7c7942404170be35e5d3d957a21e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Arma 3 Tools !!EXCLUSIVE!! Crack Download Pc Kickass.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                              Arma 3 Tools crack download pc kickass


                                                                                                                                              DOWNLOAD · https://urlin.us/2uEvre



- -List of every PC game checked by System Requirements Lab ... Arma 3 Creator DLC: Global Mobilization - Cold War Germany · Arma 3 DLC Bundle 2 · Arma 3 ...
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -

                                                                                                                                              diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Free Download Nancy Drew Games Full Version BEST.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Free Download Nancy Drew Games Full Version BEST.md deleted file mode 100644 index a60de89e10996785f734827a06a6274284426c59..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Free Download Nancy Drew Games Full Version BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

                                                                                                                                              Free Download Nancy Drew Games Full Version


                                                                                                                                              Download ⚙⚙⚙ https://urlin.us/2uEyTv



- -S.K.I. (PC 2001), Dr. N.T. Cracks the Case (PC 2001), A Swirly Mirror Maze, A Time to Die (PC 2000), A Way to Die (PC 2000), The Case of the Deadly Deception (PC 2001), A Time to Speak (PC 2001), The Return of the Dark Sister, The Eternal Sister, The Sacred Circle, Ghosts of Lorelei, Tragedy in the Boulevards, Into the Murky, No Man's Land, Murder of the Past, The Murderous Society, Murder Most Foul, Crimson Creme, The Open Tomb, A Case of Bloody Shadows, Crimson Shadows, Face of Death (PC 1998), Sneak My Secret Treasure (PC 1998), Fear of the Spirits, The Big Witch Hunt, Murder Witch, There's a Witch Haunting You, A Ghost of An Ancient Magic, To Catch a Ghost, No Ghost Babies, Ghost Hunting, Ghost Hunt, Dangerous Ghosts, The Deadly Ghosts, Dead End, The Devil's Ghost, Gravestones Cursed, A Ghost Eyes, The Grave Raiders, The Grave of Evil, Ghosts of The Tomb, Spooks Around the Tomb, The Haunted Tomb, The Haunted Witch's Wife, There's A Witch After You, The Dark Witch, The Eerie Lady, Red Chapel, A Spooky Summer, Clowns
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              -

                                                                                                                                              diff --git a/spaces/inreVtussa/clothingai/Examples/BEST Download Bhram Torrent.md b/spaces/inreVtussa/clothingai/Examples/BEST Download Bhram Torrent.md deleted file mode 100644 index 16b1973568b97c7f73871a55fb02038163b7eb02..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/BEST Download Bhram Torrent.md +++ /dev/null @@ -1,78 +0,0 @@ -
                                                                                                                                              -

                                                                                                                                              How to Download Bhram Torrent and Watch the ZEE5 Original Series

                                                                                                                                              -

                                                                                                                                              Bhram is a ZEE5 original series that features Kalki Koechlin, Eijaz Khan, Sanjay Suri and Bhumika Chawla in the lead roles. The series is a psychological thriller that follows a female novelist who suffers from post-traumatic stress disorder (PTSD) and experiences strange visions while working on a story. If you are looking for a gripping and suspenseful series to watch, you can download Bhram torrent and enjoy it on your device.

                                                                                                                                              -

                                                                                                                                              Download Bhram Torrent


                                                                                                                                              DOWNLOAD →→→ https://tiurll.com/2uCkev



                                                                                                                                              -

                                                                                                                                              What is the plot of Bhram?

                                                                                                                                              -

The plot of Bhram revolves around Alisha Khanna, a bestselling romance writer who moves to Shimla with her sister after a car accident that changes her life. However, she soon starts having visions of a girl who died in a mysterious fire 20 years ago. She also meets Peter Paul, a psychiatrist who tries to help her cope with her trauma. As she investigates the mystery behind the girl's death, she uncovers dark secrets that have long been buried.

                                                                                                                                              -

                                                                                                                                              How to download Bhram torrent safely and easily?

                                                                                                                                              -

                                                                                                                                              If you want to download Bhram torrent and watch the series online for free, you need to follow some steps to ensure that you get a high-quality and safe torrent file. Here are some tips to download Bhram torrent without any hassle:

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • Use a reliable torrent client such as µTorrent or BitTorrent. These are popular and trusted software that allow you to download torrent files or magnet links from various sources.
                                                                                                                                              • -
                                                                                                                                              • Use a VPN service to protect your privacy and security while downloading torrents. A VPN will hide your IP address and encrypt your traffic, so you can avoid ISP throttling, geo-restrictions and legal issues.
                                                                                                                                              • -
                                                                                                                                              • Use a reputable torrent site to find Bhram torrent. Some of the best torrent sites for ZEE5 shows are LimeTorrents, 1337x, Torrentz2 and RARBG. These sites have a large collection of torrents and magnet links for movies, TV shows, music, games and more.
                                                                                                                                              • -
                                                                                                                                              • Choose a high-quality Bhram torrent with good seeders and leechers. This will ensure that you get a fast and smooth download speed and avoid corrupted or incomplete files.
                                                                                                                                              • -
                                                                                                                                              • Enjoy watching Bhram on your preferred device. You can use VLC media player or any other compatible player to watch the downloaded video files.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              Why should you watch Bhram?

                                                                                                                                              -

                                                                                                                                              Bhram is a series that will keep you hooked with its captivating storyline, stellar performances and stunning visuals. Here are some reasons why you should watch Bhram:

                                                                                                                                              -

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • Bhram is one of the rare Indian web series that explores the genre of psychological horror/thriller with a female protagonist. It deals with themes such as PTSD, mental health, paranormal phenomena and mythology.
                                                                                                                                              • -
• Bhram features Kalki Koechlin in one of her best roles to date. She portrays the complex character of Alisha Khanna with finesse and intensity, supported by an equally talented cast that includes Eijaz Khan, Sanjay Suri and Bhumika Chawla.
                                                                                                                                              • -
• Bhram has striking cinematography that captures the beauty and mystery of Shimla, along with a haunting background score that adds to the suspense and thrill.
                                                                                                                                              • -
                                                                                                                                              • Bhram has a twisty and unpredictable storyline that will keep you guessing till the end. The series has eight episodes that are packed with drama, suspense and horror.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              Conclusion

                                                                                                                                              -

                                                                                                                                              Bhram is a ZEE5 original series that you should not miss if you are a fan of psychological thrillers. You can download Bhram torrent from various sources and watch it on your device at your convenience. However, make sure you use a torrent client, a VPN service and a reputable torrent site to download Bhram torrent safely and easily.

                                                                                                                                              -

                                                                                                                                              What are the reviews of Bhram?

                                                                                                                                              -

Bhram has received mixed reviews from critics and viewers. Some have praised the series for its engaging plot, brilliant performances and atmospheric setting. Others have criticized it for its slow pace, weak direction and lack of originality. The series has a rating of 6.7/10 on IMDb and 2.5/5 from the Times of India.

                                                                                                                                              -

                                                                                                                                              What are the alternatives to Bhram?

                                                                                                                                              -

                                                                                                                                              If you are looking for more psychological thriller series to watch, you can check out some of these alternatives to Bhram:

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • The Gone Game: A Voot Select original series that follows the investigation of a mysterious death during the lockdown. The series stars Sanjay Kapoor, Arjun Mathur, Shweta Tripathi and Shriya Pilgaonkar.
                                                                                                                                              • -
• Breathe: An Amazon Prime Video original series that revolves around a father who goes to extreme lengths to save his son's life. The series stars R. Madhavan, Amit Sadh, Hrishikesh Joshi and Sapna Pabbi.
                                                                                                                                              • -
                                                                                                                                              • Asur: A Voot Select original series that blends mythology and forensic science in a thrilling chase between a serial killer and a team of forensic experts. The series stars Arshad Warsi, Barun Sobti, Anupriya Goenka and Ridhi Dogra.
                                                                                                                                              • -
                                                                                                                                              • The Final Call: A ZEE5 original series that follows the passengers and crew of a flight that is hijacked by a suicidal pilot. The series stars Arjun Rampal, Javed Jaffrey, Neeraj Kabi and Sakshi Tanwar.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              -

                                                                                                                                              What are the benefits of downloading Bhram torrent?

                                                                                                                                              -

                                                                                                                                              Downloading Bhram torrent has some advantages over streaming the series online. Here are some benefits of downloading Bhram torrent:

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • You can watch Bhram offline without any internet connection. This is useful if you have a limited data plan or a slow internet speed.
                                                                                                                                              • -
                                                                                                                                              • You can watch Bhram on any device that supports video playback. You can transfer the downloaded files to your laptop, tablet, smartphone or TV and enjoy the series on a bigger screen.
                                                                                                                                              • -
                                                                                                                                              • You can watch Bhram in HD quality without any buffering or ads. You can choose the resolution and quality of the torrent file according to your preference and device compatibility.
                                                                                                                                              • -
                                                                                                                                              • You can watch Bhram at your own pace and convenience. You can pause, resume, rewind or fast-forward the episodes as you wish.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              What are the risks of downloading Bhram torrent?

                                                                                                                                              -

                                                                                                                                              Downloading Bhram torrent also has some risks that you need to be aware of. Here are some risks of downloading Bhram torrent:

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • You may download fake or malicious torrent files that can harm your device or compromise your security. You may also download incomplete or corrupted files that can ruin your viewing experience.
                                                                                                                                              • -
                                                                                                                                              • You may violate the copyright laws and face legal consequences. Downloading Bhram torrent without the permission of the creators or distributors is considered illegal and unethical.
                                                                                                                                              • -
                                                                                                                                              • You may expose your IP address and online activity to third parties such as ISPs, hackers or authorities. This can lead to privacy breaches, identity theft or surveillance.
                                                                                                                                              • -
                                                                                                                                              • You may encounter pop-ups, banners or redirects that can annoy you or infect your device with malware. Some torrent sites may also have adult or inappropriate content that can offend you or others.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              -

                                                                                                                                              What are the best practices for downloading Bhram torrent?

                                                                                                                                              -

                                                                                                                                              Downloading Bhram torrent can be a rewarding experience if you follow some best practices. Here are some tips to make the most of your torrent download:

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • Read the comments and reviews of the torrent file before downloading it. This will help you avoid fake or low-quality files and get feedback from other users.
                                                                                                                                              • -
                                                                                                                                              • Check the file size and format of the torrent file before downloading it. This will help you avoid unwanted or incompatible files and save your storage space.
                                                                                                                                              • -
                                                                                                                                              • Seed the torrent file after downloading it. This will help you share the file with other users and maintain the health of the torrent network.
                                                                                                                                              • -
                                                                                                                                              • Delete the torrent file after watching it. This will help you free up your storage space and avoid cluttering your device.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              What are the challenges of downloading Bhram torrent?

                                                                                                                                              -

                                                                                                                                              Downloading Bhram torrent can also pose some challenges that you need to overcome. Here are some difficulties that you may face while downloading Bhram torrent:

                                                                                                                                              -
                                                                                                                                                -
                                                                                                                                              • You may face slow download speed or connection issues due to various factors such as network congestion, server overload or ISP interference.
                                                                                                                                              • -
                                                                                                                                              • You may face legal issues or penalties if you download Bhram torrent without proper authorization or permission from the creators or distributors.
                                                                                                                                              • -
                                                                                                                                              • You may face ethical issues or guilt if you download Bhram torrent without supporting the creators or distributors who have invested their time, money and effort in making the series.
                                                                                                                                              • -
                                                                                                                                              • You may face moral issues or dilemmas if you download Bhram torrent without respecting the rights and wishes of the creators or distributors who have created the series for your entertainment and enjoyment.
                                                                                                                                              • -
                                                                                                                                              -

                                                                                                                                              Conclusion

                                                                                                                                              -

                                                                                                                                              Bhram is a ZEE5 original series that you can download Bhram torrent and watch online for free. The series is a psychological thriller that follows a female novelist who suffers from PTSD and experiences strange visions while working on a story. The series has a captivating storyline, stellar performances and stunning visuals. However, you need to use a torrent client, a VPN service and a reputable torrent site to download Bhram torrent safely and easily.

                                                                                                                                              -

                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              \ No newline at end of file diff --git a/spaces/ivanmeyer/DreamlikeArt-PhotoReal-2.0/style.css b/spaces/ivanmeyer/DreamlikeArt-PhotoReal-2.0/style.css deleted file mode 100644 index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000 --- a/spaces/ivanmeyer/DreamlikeArt-PhotoReal-2.0/style.css +++ /dev/null @@ -1,84 +0,0 @@ -#col-container { - max-width: 800px; - margin-left: auto; - margin-right: auto; -} -a { - color: inherit; - text-decoration: underline; -} -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - border-color: #9d66e5; - background: #9d66e5; -} -input[type='range'] { - accent-color: #9d66e5; -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} -.container { - max-width: 800px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -#advanced-options { - margin-bottom: 20px; -} -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .logo{ filter: invert(1); } -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/image_sharpening.py b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/image_sharpening.py deleted file mode 100644 index 6d12b5d5bffa496c245a09d823cbaa9989c52435..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/image_sharpening.py +++ /dev/null @@ -1,22 +0,0 @@ -import cv2 -import numpy as np - -def unsharp_mask(img, kernel_size=(5, 5), sigma=1.0, amount=1.0, threshold=0, mask=None): - if amount == 0: - return img - # Return a sharpened version of the image, using an unsharp mask. 
- # If mask is not None, only areas under mask are handled - blurred = cv2.GaussianBlur(img, kernel_size, sigma) - sharpened = float(amount + 1) * img - float(amount) * blurred - sharpened = np.maximum(sharpened, np.zeros(sharpened.shape)) - sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape)) - sharpened = sharpened.round().astype(np.uint8) - if threshold > 0: - low_contrast_mask = np.absolute(img - blurred) < threshold - np.copyto(sharpened, img, where=low_contrast_mask) - if mask is not None: - mask = np.array(mask) - masked_sharpened = cv2.bitwise_and(sharpened, sharpened, mask=mask) - masked_img = cv2.bitwise_and(img, img, mask=255-mask) - sharpened = cv2.add(masked_img, masked_sharpened) - return sharpened diff --git a/spaces/jacklindsai/is_it_elon_musk/app.py b/spaces/jacklindsai/is_it_elon_musk/app.py deleted file mode 100644 index e77b26b53a665ab7e2702d424a80858f41fd825e..0000000000000000000000000000000000000000 --- a/spaces/jacklindsai/is_it_elon_musk/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import gradio as gr -from transformers import AutoTokenizer, AutoModel -import torch -import torch.nn.functional as F -import numpy as np - -tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-cased-finetuned-mrpc') -model = torch.hub.load('huggingface/pytorch-transformers', 'modelForSequenceClassification', 'bert-base-cased-finetuned-mrpc') -model = model.from_pretrained("jacklindsai/is_it_elon_musk") - -def preprocess_text(text): - return tokenizer.encode_plus(text, truncation=True, padding='max_length', max_length=48, return_attention_mask=True) - -device = torch.device('cpu') -def pred_is_elon_musk(text): - encoded_text = preprocess_text(text) - ids = encoded_text['input_ids'] - masks = encoded_text['attention_mask'] - ids = torch.Tensor([ids]).to(device, dtype=torch.int32) - masks = torch.Tensor([masks]).to(device, dtype=torch.int32) - results = model(input_ids=ids, token_type_ids=None, - attention_mask=masks) - logis = results['logits'].detach() - prob = F.softmax(logis, dim=1)[0][1] - prediction = np.argmax(logis.numpy(), axis=1).flatten() - output1 = f"The predicted probability is {prob*100: 0.2f}%.\n" - if 0.4 <= prob <= 0.6: - output2 = f"Therefore, maybe it's from Elon Musk or maybe not." - elif prediction[0] == 1: - output2 = f"Therefore, maybe it is from Elon Musk." - else: - output2 = f"Therefore, maybe it is Not from Elon Musk." - return output1 + output2 - -iface = gr.Interface(pred_is_elon_musk, inputs="text", - outputs="text", title='“Is the tweet from Elon Musk?” Classifier', - theme = "huggingface", examples=["Now I'm going to buy McDonald's and fix all the ice cream machines...", - '"Real magic is only a sip away."(Actual slogan of Coca-Cola!!) 🤣🤣', - 'Let’s make Twitter maximum fun!', - 'I hope that even my worst critics remain on Twitter, because that is what free speech means'], - description="This app predicts whether the tweet is from Elon Musk based on a fine-tuned BERT model. 
The model considers the first 48 words at most.") - -iface.launch(inline=False) \ No newline at end of file diff --git a/spaces/jarvis1997/fr_demo1/app.py b/spaces/jarvis1997/fr_demo1/app.py deleted file mode 100644 index 8e2f3276c0a77c7b3d006561ac7bdc74617e48a6..0000000000000000000000000000000000000000 --- a/spaces/jarvis1997/fr_demo1/app.py +++ /dev/null @@ -1,120 +0,0 @@ -import gradio as gr -import os -import shutil -import torch -from PIL import Image -import argparse -import pathlib - -os.system("git clone https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model") -os.chdir("Thin-Plate-Spline-Motion-Model") -os.system("mkdir checkpoints") -os.system("wget -c https://cloud.tsinghua.edu.cn/f/da8d61d012014b12a9e4/?dl=1 -O checkpoints/vox.pth.tar") - - - -title = "# 表情驱动" - - -def get_style_image_path(style_name: str) -> str: - base_path = 'assets' - filenames = { - 'source': 'source.png', - 'driving': 'driving.mp4', - } - return f'{base_path}/{filenames[style_name]}' - - -def get_style_image_markdown_text(style_name: str) -> str: - url = get_style_image_path(style_name) - return f'style image' - - -def update_style_image(style_name: str) -> dict: - text = get_style_image_markdown_text(style_name) - return gr.Markdown.update(value=text) - - -def set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) - -def set_example_video(example: list) -> dict: - return gr.Video.update(value=example[0]) - -def inference(img,vid): - if not os.path.exists('temp'): - os.system('mkdir temp') - - img.save("temp/image.jpg", "JPEG") - os.system(f"python demo.py --config config/vox-256.yaml --checkpoint ./checkpoints/vox.pth.tar --source_image 'temp/image.jpg' --driving_video {vid} --result_video './temp/result.mp4' --cpu") - return './temp/result.mp4' - - - -def main(): - with gr.Blocks(theme="huggingface", css='style.css') as demo: - - with gr.Box(): - gr.Markdown('''## Step 1 (Provide Input Face Image) -- Drop an image containing a face to the **Input Image**. - - If there are multiple faces in the image, use Edit button in the upper right corner and crop the input image beforehand. -''') - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_image = gr.Image(label='Input Image', - type="pil") - - with gr.Row(): - paths = sorted(pathlib.Path('assets').glob('*.png')) - example_images = gr.Dataset(components=[input_image], - samples=[[path.as_posix()] - for path in paths]) - - with gr.Box(): - gr.Markdown('''## Step 2 (Select Driving Video) -- Select **Style Driving Video for the face image animation**. -''') - with gr.Row(): - with gr.Column(): - with gr.Row(): - driving_video = gr.Video(label='Driving Video', - format="mp4") - - with gr.Row(): - paths = sorted(pathlib.Path('assets').glob('*.mp4')) - example_video = gr.Dataset(components=[driving_video], - samples=[[path.as_posix()] - for path in paths]) - - with gr.Box(): - gr.Markdown('''## Step 3 (Generate Animated Image based on the Video) -- Hit the **Generate** button. (Note: As it runs on the CPU, it takes ~ 3 minutes to generate final results.) 
-''') - with gr.Row(): - with gr.Column(): - with gr.Row(): - generate_button = gr.Button('Generate') - - with gr.Column(): - result = gr.Video(type="file", label="Output") - generate_button.click(fn=inference, - inputs=[ - input_image, - driving_video - ], - outputs=result) - example_images.click(fn=set_example_image, - inputs=example_images, - outputs=example_images.components) - example_video.click(fn=set_example_video, - inputs=example_video, - outputs=example_video.components) - - demo.launch( - enable_queue=True, - debug=True - ) - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/jbilcke-hf/webapp-factory-llama-node/Dockerfile b/spaces/jbilcke-hf/webapp-factory-llama-node/Dockerfile deleted file mode 100644 index 7840d4adceca6d9de88874739d1898995f323d2e..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/webapp-factory-llama-node/Dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -FROM node:18 - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -ADD --chown=user https://huggingface.co/TheBloke/airoboros-13b-gpt4-GGML/resolve/main/airoboros-13b-gpt4.ggmlv3.q4_0.bin models/airoboros-13b-gpt4.ggmlv3.q4_0.bin - -EXPOSE 7860 -CMD [ "npm", "run", "start" ] \ No newline at end of file diff --git a/spaces/jitesh/storytelling/src/play_storytelling.py b/spaces/jitesh/storytelling/src/play_storytelling.py deleted file mode 100644 index 28f3395f795a210e6f1655e5bb575ddf049049c8..0000000000000000000000000000000000000000 --- a/spaces/jitesh/storytelling/src/play_storytelling.py +++ /dev/null @@ -1,59 +0,0 @@ -import random - -import numpy as np -import streamlit as st - -from .lib import initialise_storytelling - - -def run_play_storytelling(gen, container_guide, container_param, container_button): - first_sentence, first_emotion, length = initialise_storytelling( - gen, container_guide, container_param, container_button) - # story_till_now = first_sentence - if 'sentence_list' not in st.session_state: - st.session_state.sentence_list = [{'sentence': first_sentence, - 'emotion': first_emotion['label'], - 'score': first_emotion['score']}] - if 'full_story' not in st.session_state: - st.session_state.full_story = first_sentence - container_button = container_button.columns([1, 1, 1]) - heading_container = st.container() - col_turn, col_sentence, col_emo = st.columns([1, 8, 2]) - if container_button[0].button('Run'): - heading_container.markdown(f'### Story') - - # st.text(story_till_now) - full_story, emotion, new_sentence = gen.next_sentence( - st.session_state.full_story, length) - st.session_state.full_story = full_story - st.session_state.sentence_list.append({ - 'sentence': new_sentence, - 'emotion': emotion["label"], - 'score': emotion["score"]}) - # col_sentence.markdown(st.session_state.sentence_list) - for step in st.session_state.sentence_list: - col_turn, col_sentence, col_emo = st.columns([1, 8, 2]) - col_sentence.markdown(step['sentence']) - col_emo.markdown( - 
f'{step["emotion"]} {np.round(step["score"], 3)}', unsafe_allow_html=False) - - else: - step = st.session_state.sentence_list[0] - # col_sentence.markdown(step['sentence']) - # col_emo.markdown( - # f'{step["emotion"]} {np.round(step["score"], 3)}', unsafe_allow_html=False) - container_guide.markdown( - '### Write the first sentence and then hit the `Run` button') - - if container_button[2].button('Clear'): - - st.session_state.full_story = first_sentence - st.session_state.sentence_list = [{'sentence': first_sentence, - 'emotion': first_emotion['label'], - 'score': first_emotion['score']}] - - st.sidebar.markdown( - ''' - * Click `Run` again to generate the next sentence. - * Click `Clear` twice to reset the story. - ''') diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/test_OCB.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/test_OCB.py deleted file mode 100644 index 6e1a47ca36604c38e2b4bd35f35f474ea0ae07ab..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/test_OCB.py +++ /dev/null @@ -1,845 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -import unittest -from binascii import unhexlify - -from Crypto.Util.py3compat import b, tobytes, bchr -from Crypto.Util.number import long_to_bytes -from Crypto.SelfTest.loader import load_test_vectors -from Crypto.SelfTest.st_common import list_test_cases - -from Crypto.Cipher import AES -from Crypto.Hash import SHAKE128 - - -def get_tag_random(tag, length): - return SHAKE128.new(data=tobytes(tag)).read(length) - - -class OcbTests(unittest.TestCase): - - key_128 = get_tag_random("key_128", 16) - nonce_96 = get_tag_random("nonce_128", 12) - data = get_tag_random("data", 128) - - def test_loopback_128(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - pt = get_tag_random("plaintext", 16 * 100) - ct, mac = cipher.encrypt_and_digest(pt) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - pt2 = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(pt, pt2) - - def test_nonce(self): - # Nonce is optional - AES.new(self.key_128, AES.MODE_OCB) - - cipher = AES.new(self.key_128, AES.MODE_OCB, self.nonce_96) - ct = cipher.encrypt(self.data) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - self.assertEqual(ct, cipher.encrypt(self.data)) - - def test_nonce_must_be_bytes(self): - self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, - nonce=u'test12345678') - - def test_nonce_length(self): - # nonce cannot be empty - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, - nonce=b("")) - - # nonce can be up to 15 bytes long - for length in range(1, 16): - AES.new(self.key_128, AES.MODE_OCB, nonce=self.data[:length]) - - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, - nonce=self.data) - - def test_block_size_128(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - self.assertEqual(cipher.block_size, AES.block_size) - - # By default, a 15 bytes long nonce is randomly generated - nonce1 = AES.new(self.key_128, AES.MODE_OCB).nonce - nonce2 = AES.new(self.key_128, AES.MODE_OCB).nonce - self.assertEqual(len(nonce1), 15) - self.assertNotEqual(nonce1, nonce2) - - def test_nonce_attribute(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - self.assertEqual(cipher.nonce, self.nonce_96) - - # By default, a 15 bytes long nonce is randomly generated - nonce1 = AES.new(self.key_128, AES.MODE_OCB).nonce - nonce2 = AES.new(self.key_128, AES.MODE_OCB).nonce - self.assertEqual(len(nonce1), 15) - self.assertNotEqual(nonce1, nonce2) - - def test_unknown_parameters(self): - self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, - self.nonce_96, 7) - self.assertRaises(TypeError, AES.new, self.key_128, AES.MODE_OCB, - nonce=self.nonce_96, unknown=7) - - # But some are only known by the base cipher - # (e.g. 
use_aesni consumed by the AES module) - AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96, - use_aesni=False) - - def test_null_encryption_decryption(self): - for func in "encrypt", "decrypt": - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - result = getattr(cipher, func)(b("")) - self.assertEqual(result, b("")) - - def test_either_encrypt_or_decrypt(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.encrypt(b("xyz")) - self.assertRaises(TypeError, cipher.decrypt, b("xyz")) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.decrypt(b("xyz")) - self.assertRaises(TypeError, cipher.encrypt, b("xyz")) - - def test_data_must_be_bytes(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.encrypt, u'test1234567890-*') - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - self.assertRaises(TypeError, cipher.decrypt, u'test1234567890-*') - - def test_mac_len(self): - # Invalid MAC length - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, - nonce=self.nonce_96, mac_len=7) - self.assertRaises(ValueError, AES.new, self.key_128, AES.MODE_OCB, - nonce=self.nonce_96, mac_len=16+1) - - # Valid MAC length - for mac_len in range(8, 16 + 1): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96, - mac_len=mac_len) - _, mac = cipher.encrypt_and_digest(self.data) - self.assertEqual(len(mac), mac_len) - - # Default MAC length - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - _, mac = cipher.encrypt_and_digest(self.data) - self.assertEqual(len(mac), 16) - - def test_invalid_mac(self): - from Crypto.Util.strxor import strxor_c - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - ct, mac = cipher.encrypt_and_digest(self.data) - - invalid_mac = strxor_c(mac, 0x01) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - self.assertRaises(ValueError, cipher.decrypt_and_verify, ct, - invalid_mac) - - def test_hex_mac(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - mac_hex = cipher.hexdigest() - self.assertEqual(cipher.digest(), unhexlify(mac_hex)) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.hexverify(mac_hex) - - def test_message_chunks(self): - # Validate that both associated data and plaintext/ciphertext - # can be broken up in chunks of arbitrary length - - auth_data = get_tag_random("authenticated data", 127) - plaintext = get_tag_random("plaintext", 127) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.update(auth_data) - ciphertext, ref_mac = cipher.encrypt_and_digest(plaintext) - - def break_up(data, chunk_length): - return [data[i:i+chunk_length] for i in range(0, len(data), - chunk_length)] - - # Encryption - for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - - for chunk in break_up(auth_data, chunk_length): - cipher.update(chunk) - pt2 = b("") - for chunk in break_up(ciphertext, chunk_length): - pt2 += cipher.decrypt(chunk) - pt2 += cipher.decrypt() - self.assertEqual(plaintext, pt2) - cipher.verify(ref_mac) - - # Decryption - for chunk_length in 1, 2, 3, 7, 10, 13, 16, 40, 80, 128: - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - - for chunk in break_up(auth_data, chunk_length): - cipher.update(chunk) - ct2 = b("") - for chunk in break_up(plaintext, chunk_length): - ct2 
+= cipher.encrypt(chunk) - ct2 += cipher.encrypt() - self.assertEqual(ciphertext, ct2) - self.assertEqual(cipher.digest(), ref_mac) - - def test_bytearray(self): - - # Encrypt - key_ba = bytearray(self.key_128) - nonce_ba = bytearray(self.nonce_96) - header_ba = bytearray(self.data) - data_ba = bytearray(self.data) - - cipher1 = AES.new(self.key_128, - AES.MODE_OCB, - nonce=self.nonce_96) - cipher1.update(self.data) - ct = cipher1.encrypt(self.data) + cipher1.encrypt() - tag = cipher1.digest() - - cipher2 = AES.new(key_ba, - AES.MODE_OCB, - nonce=nonce_ba) - key_ba[:3] = b"\xFF\xFF\xFF" - nonce_ba[:3] = b"\xFF\xFF\xFF" - cipher2.update(header_ba) - header_ba[:3] = b"\xFF\xFF\xFF" - ct_test = cipher2.encrypt(data_ba) + cipher2.encrypt() - data_ba[:3] = b"\xFF\xFF\xFF" - tag_test = cipher2.digest() - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key_ba = bytearray(self.key_128) - nonce_ba = bytearray(self.nonce_96) - header_ba = bytearray(self.data) - del data_ba - - cipher4 = AES.new(key_ba, - AES.MODE_OCB, - nonce=nonce_ba) - key_ba[:3] = b"\xFF\xFF\xFF" - nonce_ba[:3] = b"\xFF\xFF\xFF" - cipher4.update(header_ba) - header_ba[:3] = b"\xFF\xFF\xFF" - pt_test = cipher4.decrypt_and_verify(bytearray(ct_test), bytearray(tag_test)) - - self.assertEqual(self.data, pt_test) - - def test_memoryview(self): - - # Encrypt - key_mv = memoryview(bytearray(self.key_128)) - nonce_mv = memoryview(bytearray(self.nonce_96)) - header_mv = memoryview(bytearray(self.data)) - data_mv = memoryview(bytearray(self.data)) - - cipher1 = AES.new(self.key_128, - AES.MODE_OCB, - nonce=self.nonce_96) - cipher1.update(self.data) - ct = cipher1.encrypt(self.data) + cipher1.encrypt() - tag = cipher1.digest() - - cipher2 = AES.new(key_mv, - AES.MODE_OCB, - nonce=nonce_mv) - key_mv[:3] = b"\xFF\xFF\xFF" - nonce_mv[:3] = b"\xFF\xFF\xFF" - cipher2.update(header_mv) - header_mv[:3] = b"\xFF\xFF\xFF" - ct_test = cipher2.encrypt(data_mv) + cipher2.encrypt() - data_mv[:3] = b"\xFF\xFF\xFF" - tag_test = cipher2.digest() - - self.assertEqual(ct, ct_test) - self.assertEqual(tag, tag_test) - self.assertEqual(cipher1.nonce, cipher2.nonce) - - # Decrypt - key_mv = memoryview(bytearray(self.key_128)) - nonce_mv = memoryview(bytearray(self.nonce_96)) - header_mv = memoryview(bytearray(self.data)) - del data_mv - - cipher4 = AES.new(key_mv, - AES.MODE_OCB, - nonce=nonce_mv) - key_mv[:3] = b"\xFF\xFF\xFF" - nonce_mv[:3] = b"\xFF\xFF\xFF" - cipher4.update(header_mv) - header_mv[:3] = b"\xFF\xFF\xFF" - pt_test = cipher4.decrypt_and_verify(memoryview(ct_test), memoryview(tag_test)) - - self.assertEqual(self.data, pt_test) - - -class OcbFSMTests(unittest.TestCase): - - key_128 = get_tag_random("key_128", 16) - nonce_96 = get_tag_random("nonce_128", 12) - data = get_tag_random("data", 128) - - def test_valid_init_encrypt_decrypt_digest_verify(self): - # No authenticated data, fixed plaintext - # Verify path INIT->ENCRYPT->ENCRYPT(NONE)->DIGEST - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - ct = cipher.encrypt(self.data) - ct += cipher.encrypt() - mac = cipher.digest() - - # Verify path INIT->DECRYPT->DECRYPT(NONCE)->VERIFY - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.decrypt() - cipher.verify(mac) - - def test_invalid_init_encrypt_decrypt_digest_verify(self): - # No authenticated data, fixed plaintext - # Verify path INIT->ENCRYPT->DIGEST - cipher = AES.new(self.key_128, AES.MODE_OCB, 
- nonce=self.nonce_96) - ct = cipher.encrypt(self.data) - self.assertRaises(TypeError, cipher.digest) - - # Verify path INIT->DECRYPT->VERIFY - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.decrypt(ct) - self.assertRaises(TypeError, cipher.verify) - - def test_valid_init_update_digest_verify(self): - # No plaintext, fixed authenticated data - # Verify path INIT->UPDATE->DIGEST - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - mac = cipher.digest() - - # Verify path INIT->UPDATE->VERIFY - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.verify(mac) - - def test_valid_full_path(self): - # Fixed authenticated data, fixed plaintext - # Verify path INIT->UPDATE->ENCRYPT->ENCRYPT(NONE)->DIGEST - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - ct = cipher.encrypt(self.data) - ct += cipher.encrypt() - mac = cipher.digest() - - # Verify path INIT->UPDATE->DECRYPT->DECRYPT(NONE)->VERIFY - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.decrypt(ct) - cipher.decrypt() - cipher.verify(mac) - - # Verify path INIT->UPDATE->ENCRYPT->ENCRYPT_AND_DIGEST - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - ct1 = cipher.encrypt(self.data[:2]) - ct2, mac = cipher.encrypt_and_digest(self.data[2:]) - - # Verify path INIT->UPDATE->DECRYPT->DECRYPT_AND_VERIFY - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.decrypt(ct1) - cipher.decrypt_and_verify(ct2, mac) - - def test_invalid_encrypt_after_final(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.encrypt(self.data) - cipher.encrypt() - self.assertRaises(TypeError, cipher.encrypt, self.data) - - def test_invalid_decrypt_after_final(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - cipher.update(self.data) - cipher.decrypt(self.data) - cipher.decrypt() - self.assertRaises(TypeError, cipher.decrypt, self.data) - - def test_valid_init_digest(self): - # Verify path INIT->DIGEST - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.digest() - - def test_valid_init_verify(self): - # Verify path INIT->VERIFY - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - mac = cipher.digest() - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.verify(mac) - - def test_valid_multiple_encrypt_or_decrypt(self): - for method_name in "encrypt", "decrypt": - for auth_data in (None, b("333"), self.data, - self.data + b("3")): - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - if auth_data is not None: - cipher.update(auth_data) - method = getattr(cipher, method_name) - method(self.data) - method(self.data) - method(self.data) - method(self.data) - method() - - def test_valid_multiple_digest_or_verify(self): - # Multiple calls to digest - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.update(self.data) - first_mac = cipher.digest() - for x in range(4): - self.assertEqual(first_mac, cipher.digest()) - - # Multiple calls to verify - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.update(self.data) - for x in range(5): - cipher.verify(first_mac) - - def test_valid_encrypt_and_digest_decrypt_and_verify(self): - # 
encrypt_and_digest - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.update(self.data) - ct, mac = cipher.encrypt_and_digest(self.data) - - # decrypt_and_verify - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.update(self.data) - pt = cipher.decrypt_and_verify(ct, mac) - self.assertEqual(self.data, pt) - - def test_invalid_mixing_encrypt_decrypt(self): - # Once per method, with or without assoc. data - for method1_name, method2_name in (("encrypt", "decrypt"), - ("decrypt", "encrypt")): - for assoc_data_present in (True, False): - cipher = AES.new(self.key_128, AES.MODE_OCB, - nonce=self.nonce_96) - if assoc_data_present: - cipher.update(self.data) - getattr(cipher, method1_name)(self.data) - self.assertRaises(TypeError, getattr(cipher, method2_name), - self.data) - - def test_invalid_encrypt_or_update_after_digest(self): - for method_name in "encrypt", "update": - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.encrypt(self.data) - cipher.encrypt() - cipher.digest() - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.encrypt_and_digest(self.data) - - def test_invalid_decrypt_or_update_after_verify(self): - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - ct = cipher.encrypt(self.data) - ct += cipher.encrypt() - mac = cipher.digest() - - for method_name in "decrypt", "update": - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.decrypt(ct) - cipher.decrypt() - cipher.verify(mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data) - - cipher = AES.new(self.key_128, AES.MODE_OCB, nonce=self.nonce_96) - cipher.decrypt_and_verify(ct, mac) - self.assertRaises(TypeError, getattr(cipher, method_name), - self.data) - - -def algo_rfc7253(keylen, taglen, noncelen): - """Implement the algorithm at page 18 of RFC 7253""" - - key = bchr(0) * (keylen // 8 - 1) + bchr(taglen) - C = b"" - - for i in range(128): - S = bchr(0) * i - - N = long_to_bytes(3 * i + 1, noncelen // 8) - cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) - cipher.update(S) - C += cipher.encrypt(S) + cipher.encrypt() + cipher.digest() - - N = long_to_bytes(3 * i + 2, noncelen // 8) - cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) - C += cipher.encrypt(S) + cipher.encrypt() + cipher.digest() - - N = long_to_bytes(3 * i + 3, noncelen // 8) - cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) - cipher.update(S) - C += cipher.encrypt() + cipher.digest() - - N = long_to_bytes(385, noncelen // 8) - cipher = AES.new(key, AES.MODE_OCB, nonce=N, mac_len=taglen // 8) - cipher.update(C) - return cipher.encrypt() + cipher.digest() - - -class OcbRfc7253Test(unittest.TestCase): - - # Tuple with - # - nonce - # - authenticated data - # - plaintext - # - ciphertext and 16 byte MAC tag - tv1_key = "000102030405060708090A0B0C0D0E0F" - tv1 = ( - ( - "BBAA99887766554433221100", - "", - "", - "785407BFFFC8AD9EDCC5520AC9111EE6" - ), - ( - "BBAA99887766554433221101", - "0001020304050607", - "0001020304050607", - "6820B3657B6F615A5725BDA0D3B4EB3A257C9AF1F8F03009" - ), - ( - "BBAA99887766554433221102", - "0001020304050607", - "", - "81017F8203F081277152FADE694A0A00" - ), - ( - "BBAA99887766554433221103", - "", - "0001020304050607", - "45DD69F8F5AAE72414054CD1F35D82760B2CD00D2F99BFA9" - ), - ( - "BBAA99887766554433221104", - "000102030405060708090A0B0C0D0E0F", - 
"000102030405060708090A0B0C0D0E0F", - "571D535B60B277188BE5147170A9A22C3AD7A4FF3835B8C5" - "701C1CCEC8FC3358" - ), - ( - "BBAA99887766554433221105", - "000102030405060708090A0B0C0D0E0F", - "", - "8CF761B6902EF764462AD86498CA6B97" - ), - ( - "BBAA99887766554433221106", - "", - "000102030405060708090A0B0C0D0E0F", - "5CE88EC2E0692706A915C00AEB8B2396F40E1C743F52436B" - "DF06D8FA1ECA343D" - ), - ( - "BBAA99887766554433221107", - "000102030405060708090A0B0C0D0E0F1011121314151617", - "000102030405060708090A0B0C0D0E0F1011121314151617", - "1CA2207308C87C010756104D8840CE1952F09673A448A122" - "C92C62241051F57356D7F3C90BB0E07F" - ), - ( - "BBAA99887766554433221108", - "000102030405060708090A0B0C0D0E0F1011121314151617", - "", - "6DC225A071FC1B9F7C69F93B0F1E10DE" - ), - ( - "BBAA99887766554433221109", - "", - "000102030405060708090A0B0C0D0E0F1011121314151617", - "221BD0DE7FA6FE993ECCD769460A0AF2D6CDED0C395B1C3C" - "E725F32494B9F914D85C0B1EB38357FF" - ), - ( - "BBAA9988776655443322110A", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F", - "BD6F6C496201C69296C11EFD138A467ABD3C707924B964DE" - "AFFC40319AF5A48540FBBA186C5553C68AD9F592A79A4240" - ), - ( - "BBAA9988776655443322110B", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F", - "", - "FE80690BEE8A485D11F32965BC9D2A32" - ), - ( - "BBAA9988776655443322110C", - "", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F", - "2942BFC773BDA23CABC6ACFD9BFD5835BD300F0973792EF4" - "6040C53F1432BCDFB5E1DDE3BC18A5F840B52E653444D5DF" - ), - ( - "BBAA9988776655443322110D", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627", - "D5CA91748410C1751FF8A2F618255B68A0A12E093FF45460" - "6E59F9C1D0DDC54B65E8628E568BAD7AED07BA06A4A69483" - "A7035490C5769E60" - ), - ( - "BBAA9988776655443322110E", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627", - "", - "C5CD9D1850C141E358649994EE701B68" - ), - ( - "BBAA9988776655443322110F", - "", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627", - "4412923493C57D5DE0D700F753CCE0D1D2D95060122E9F15" - "A5DDBFC5787E50B5CC55EE507BCB084E479AD363AC366B95" - "A98CA5F3000B1479" - ) - ) - - # Tuple with - # - key - # - nonce - # - authenticated data - # - plaintext - # - ciphertext and 12 byte MAC tag - tv2 = ( - "0F0E0D0C0B0A09080706050403020100", - "BBAA9988776655443322110D", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627", - "000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627", - "1792A4E31E0755FB03E31B22116E6C2DDF9EFD6E33D536F1" - "A0124B0A55BAE884ED93481529C76B6AD0C515F4D1CDD4FD" - "AC4F02AA" - ) - - # Tuple with - # - key length - # - MAC tag length - # - Expected output - tv3 = ( - (128, 128, "67E944D23256C5E0B6C61FA22FDF1EA2"), - (192, 128, "F673F2C3E7174AAE7BAE986CA9F29E17"), - (256, 128, "D90EB8E9C977C88B79DD793D7FFA161C"), - (128, 96, "77A3D8E73589158D25D01209"), - (192, 96, "05D56EAD2752C86BE6932C5E"), - (256, 96, "5458359AC23B0CBA9E6330DD"), - (128, 64, "192C9B7BD90BA06A"), - (192, 64, "0066BC6E0EF34E24"), - (256, 64, "7D4EA5D445501CBE"), - ) - - def test1(self): - key = unhexlify(b(self.tv1_key)) - for tv in self.tv1: - nonce, aad, pt, ct = [unhexlify(b(x)) for x in tv] - ct, mac_tag = ct[:-16], ct[-16:] - - 
cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) - cipher.update(aad) - ct2 = cipher.encrypt(pt) + cipher.encrypt() - self.assertEqual(ct, ct2) - self.assertEqual(mac_tag, cipher.digest()) - - cipher = AES.new(key, AES.MODE_OCB, nonce=nonce) - cipher.update(aad) - pt2 = cipher.decrypt(ct) + cipher.decrypt() - self.assertEqual(pt, pt2) - cipher.verify(mac_tag) - - def test2(self): - - key, nonce, aad, pt, ct = [unhexlify(b(x)) for x in self.tv2] - ct, mac_tag = ct[:-12], ct[-12:] - - cipher = AES.new(key, AES.MODE_OCB, nonce=nonce, mac_len=12) - cipher.update(aad) - ct2 = cipher.encrypt(pt) + cipher.encrypt() - self.assertEqual(ct, ct2) - self.assertEqual(mac_tag, cipher.digest()) - - cipher = AES.new(key, AES.MODE_OCB, nonce=nonce, mac_len=12) - cipher.update(aad) - pt2 = cipher.decrypt(ct) + cipher.decrypt() - self.assertEqual(pt, pt2) - cipher.verify(mac_tag) - - def test3(self): - for keylen, taglen, result in self.tv3: - result2 = algo_rfc7253(keylen, taglen, 96) - self.assertEqual(unhexlify(b(result)), result2) - - -class OcbDkgTest(unittest.TestCase): - """Test vectors from https://gitlab.com/dkg/ocb-test-vectors""" - - def test_1_2(self): - tvs = [] - for fi in (1, 2): - for nb in (104, 112, 120): - tv_file = load_test_vectors(("Cipher", "AES"), - "test-vector-%d-nonce%d.txt" % (fi, nb), - "DKG tests, %d, %d bits" % (fi, nb), - {}) - if tv_file is None: - break - key = tv_file[0].k - for tv in tv_file[1:]: - tv.k = key - tvs.append(tv) - - for tv in tvs: - k, n, a, p, c = tv.k, tv.n, tv.a, tv.p, tv.c - mac_len = len(c) - len(p) - cipher = AES.new(k, AES.MODE_OCB, nonce=n, mac_len=mac_len) - cipher.update(a) - c_out, tag_out = cipher.encrypt_and_digest(p) - self.assertEqual(c, c_out + tag_out) - - def test_3(self): - - def check(keylen, taglen, noncelen, exp): - result = algo_rfc7253(keylen, taglen, noncelen) - self.assertEqual(result, unhexlify(exp)) - - # test-vector-3-nonce104.txt - check(128, 128, 104, "C47F5F0341E15326D4D1C46F47F05062") - check(192, 128, 104, "95B9167A38EB80495DFC561A8486E109") - check(256, 128, 104, "AFE1CDDB97028FD92F8FB3C8CFBA7D83") - check(128, 96, 104, "F471B4983BA80946DF217A54") - check(192, 96, 104, "5AE828BC51C24D85FA5CC7B2") - check(256, 96, 104, "8C8335982E2B734616CAD14C") - check(128, 64, 104, "B553F74B85FD1E5B") - check(192, 64, 104, "3B49D20E513531F9") - check(256, 64, 104, "ED6DA5B1216BF8BB") - - # test-vector-3-nonce112.txt - check(128, 128, 112, "CA8AFCA031BAC3F480A583BD6C50A547") - check(192, 128, 112, "D170C1DF356308079DA9A3F619147148") - check(256, 128, 112, "57F94381F2F9231EFB04AECD323757C3") - check(128, 96, 112, "3A618B2531ED39F260C750DC") - check(192, 96, 112, "9071EB89FEDBADDA88FD286E") - check(256, 96, 112, "FDF0EFB97F21A39AC4BAB5AC") - check(128, 64, 112, "FAB2FF3A8DD82A13") - check(192, 64, 112, "AC01D912BD0737D3") - check(256, 64, 112, "9D1FD0B500EA4ECF") - - # test-vector-3-nonce120.txt - check(128, 128, 120, "9E043A7140A25FB91F43BCC9DD7E0F46") - check(192, 128, 120, "680000E53908323A7F396B955B8EC641") - check(256, 128, 120, "8304B97FAACDA56E676602E1878A7E6F") - check(128, 96, 120, "81F978AC9867E825D339847D") - check(192, 96, 120, "EFCF2D60B24926ADA48CF5B1") - check(256, 96, 120, "84961DC56E917B165E58C174") - check(128, 64, 120, "227AEE6C9D905A61") - check(192, 64, 120, "541DE691B9E1A2F9") - check(256, 64, 120, "B0E761381C7129FC") - - def test_2_bugfix(self): - nonce = unhexlify("EEDDCCBBAA9988776655443322110D") - key = unhexlify("0F0E0D0C0B0A09080706050403020100") - A = unhexlify("000102030405060708090A0B0C0D0E0F1011121314151617" - 
"18191A1B1C1D1E1F2021222324252627") - P = unhexlify("000102030405060708090A0B0C0D0E0F1011121314151617" - "18191A1B1C1D1E1F2021222324252627") - C = unhexlify("07E903BFC49552411ABC865F5ECE60F6FAD1F5A9F14D3070" - "FA2F1308A563207FFE14C1EEA44B22059C7484319D8A2C53" - "C236A7B3") - mac_len = len(C) - len(P) - - # Prior to version 3.17, a nonce of maximum length (15 bytes) - # was actually used as a 14 byte nonce. The last byte was erroneously - # ignored. - buggy_result = unhexlify("BA015C4E5AE54D76C890AE81BD40DC57" - "03EDC30E8AC2A58BC5D8FA4D61C5BAE6" - "C39BEAC435B2FD56A2A5085C1B135D77" - "0C8264B7") - cipher = AES.new(key, AES.MODE_OCB, nonce=nonce[:-1], mac_len=mac_len) - cipher.update(A) - C_out2, tag_out2 = cipher.encrypt_and_digest(P) - self.assertEqual(buggy_result, C_out2 + tag_out2) - - -def get_tests(config={}): - tests = [] - tests += list_test_cases(OcbTests) - tests += list_test_cases(OcbFSMTests) - tests += list_test_cases(OcbRfc7253Test) - tests += list_test_cases(OcbDkgTest) - return tests - - -if __name__ == '__main__': - def suite(): - return unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/johnslegers/stable-diffusion/setup.py b/spaces/johnslegers/stable-diffusion/setup.py deleted file mode 100644 index 7e1b0fdee19c2da2f2a77be0cf706362d322f02e..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/stable-diffusion/setup.py +++ /dev/null @@ -1,15 +0,0 @@ -from setuptools import setup, find_packages -import os - -def _read_reqs(relpath): - fullpath = os.path.join(os.path.dirname(__file__), relpath) - with open(fullpath) as f: - return [s.strip() for s in f.readlines() if (s.strip() and not s.startswith("#"))] - -setup( - name='stable-diffusion', - version='0.0.1', - description='', - packages=find_packages(), - install_requires=_read_reqs("requirements.txt"), -) diff --git a/spaces/kaggle/amex/README.md b/spaces/kaggle/amex/README.md deleted file mode 100644 index bdcfb4cbebbba1ed6722f1fd50cb91466e5b84ad..0000000000000000000000000000000000000000 --- a/spaces/kaggle/amex/README.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: AMEX -datasets: -- -tags: -- evaluate -- metric -description: >- - Metric used for the AMEX default prediction Kaggle challenge (https://www.kaggle.com/competitions/amex-default-prediction). -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false ---- - -# Metric Card for AMEX - -## Metric Description -The AMEX metric implements the evaluation metric used in the [American Express - Default Prediction](https://www.kaggle.com/competitions/amex-default-prediction) Kaggle competition. - -The evaluation metric, *M*, for this competition is the mean of two measures of rank ordering: Normalized Gini Coefficient, *G*, and default rate captured at 4%, *D*. - -*M=0.5⋅(G+D)* - -The default rate captured at 4% is the percentage of the positive labels (defaults) captured within the highest-ranked 4% of the predictions, and represents a Sensitivity/Recall statistic. - -For both of the sub-metrics *G* and *D*, the negative labels are given a weight of 20 to adjust for downsampling. - -This metric has a maximum value of 1.0. 
- -## How to Use - -```python -import evaluate - -amex_metric = evaluate.load("lvwerra/amex") -amex_metric.compute(references=[0, 1], predictions=[0.01, 0.99]) ->>> {'amex_score': 0.5} -``` - -### Inputs -*List all input arguments in the format below* -- **predictions** *(List[float]): Default preditictions, should be between 0-1.* -- **references** *(List[int]): Ground truth, should be between 0 or 1.* - -### Output Values - -This metric has a maximum value of 1.0. - -### Examples - -```python -import numpy as np -import pandas as pd -from pathlib import Path -import evaluate - - -input_path = Path('/kaggle/input/amex-default-prediction/') - -#load data -train_data = pd.read_csv( - input_path / 'train_data.csv', - index_col='customer_ID', - usecols=['customer_ID', 'P_2']) - -train_labels = pd.read_csv(input_path / 'train_labels.csv', index_col='customer_ID') - -#make predictions -ave_p2 = (train_data - .groupby('customer_ID') - .mean() - .rename(columns={'P_2': 'prediction'})) - -#scale the mean P_2 by the max value and take the compliment -ave_p2['prediction'] = 1.0 - (ave_p2['prediction'] / ave_p2['prediction'].max()) - -#evaluate -amex_metric = evaluate.load("lvwerra/amex") -amex_metric.compute(references=train_labels["target"], predictions=ave_p2["prediction"]) - ->>> {'amex_score': 0.5729004324151608} -``` - -## Limitations and Bias -This metric has been designed for the AMEX default prediction competition and might not be suitable for use-cases outside this scope. - -## Citation - -```bibtex -@misc{kaggle, -title={Kaggle Competition: American Express - Default Prediction}, -year={2022}, -url={https://www.kaggle.com/competitions/amex-default-prediction}, -} -``` - -## Further References -- https://www.kaggle.com/competitions/amex-default-prediction -- https://en.wikipedia.org/wiki/Gini_coefficient -- https://en.wikipedia.org/wiki/Default_(finance) \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/autogpt/commands/twitter.py b/spaces/kcagle/AutoGPT/autogpt/commands/twitter.py deleted file mode 100644 index 3eaed36e20e1c520690ac59f25a4da6501f3440f..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/commands/twitter.py +++ /dev/null @@ -1,26 +0,0 @@ -import os - -import tweepy -from dotenv import load_dotenv - -load_dotenv() - - -def send_tweet(tweet_text): - consumer_key = os.environ.get("TW_CONSUMER_KEY") - consumer_secret = os.environ.get("TW_CONSUMER_SECRET") - access_token = os.environ.get("TW_ACCESS_TOKEN") - access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET") - # Authenticate to Twitter - auth = tweepy.OAuthHandler(consumer_key, consumer_secret) - auth.set_access_token(access_token, access_token_secret) - - # Create API object - api = tweepy.API(auth) - - # Send tweet - try: - api.update_status(tweet_text) - print("Tweet sent successfully!") - except tweepy.TweepyException as e: - print("Error sending tweet: {}".format(e.reason)) diff --git a/spaces/keremberke/football-object-detection/app.py b/spaces/keremberke/football-object-detection/app.py deleted file mode 100644 index 3b29292ec92e6219a4c8bdfb15ccb0df3c2ec646..0000000000000000000000000000000000000000 --- a/spaces/keremberke/football-object-detection/app.py +++ /dev/null @@ -1,53 +0,0 @@ - -import json -import gradio as gr -import yolov5 -from PIL import Image -from huggingface_hub import hf_hub_download - -app_title = "Football Object Detection" -models_ids = ['keremberke/yolov5n-football', 'keremberke/yolov5s-football', 'keremberke/yolov5m-football'] -article = f"

huggingface.co/{models_ids[-1]} | huggingface.co/keremberke/football-object-detection | awesome-yolov5-models
                                                                                                                                              " - -current_model_id = models_ids[-1] -model = yolov5.load(current_model_id) - -examples = [['test_images/18_pp_jpg.rf.912a54e24d38371daf61114b9a6b18be.jpg', 0.25, 'keremberke/yolov5m-football'], ['test_images/54881_jpg.rf.62b337bc47dbf6fbf5a34e18a361de97.jpg', 0.25, 'keremberke/yolov5m-football'], ['test_images/55219_jpg.rf.cdfe02a50951cf1ad449e940fbb646ac.jpg', 0.25, 'keremberke/yolov5m-football']] - - -def predict(image, threshold=0.25, model_id=None): - # update model if required - global current_model_id - global model - if model_id != current_model_id: - model = yolov5.load(model_id) - current_model_id = model_id - - # get model input size - config_path = hf_hub_download(repo_id=model_id, filename="config.json") - with open(config_path, "r") as f: - config = json.load(f) - input_size = config["input_size"] - - # perform inference - model.conf = threshold - results = model(image, size=input_size) - numpy_image = results.render()[0] - output_image = Image.fromarray(numpy_image) - return output_image - - -gr.Interface( - title=app_title, - description="Created by 'keremberke'", - article=article, - fn=predict, - inputs=[ - gr.Image(type="pil"), - gr.Slider(maximum=1, step=0.01, value=0.25), - gr.Dropdown(models_ids, value=models_ids[-1]), - ], - outputs=gr.Image(type="pil"), - examples=examples, - cache_examples=True if examples else False, -).launch(enable_queue=True) diff --git a/spaces/kevin-dw/runwayml-stable-diffusion-v1-5/README.md b/spaces/kevin-dw/runwayml-stable-diffusion-v1-5/README.md deleted file mode 100644 index 682172f92e557e3c998c32a46b5e5b1367ec26b9..0000000000000000000000000000000000000000 --- a/spaces/kevin-dw/runwayml-stable-diffusion-v1-5/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Runwayml Stable Diffusion V1 5 -emoji: 🔥 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/bark/generation.py b/spaces/kevinwang676/Bark-Voice-Cloning/bark/generation.py deleted file mode 100644 index ad474d770235c7b665218e64699fb0b0b1b8cc3f..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-Voice-Cloning/bark/generation.py +++ /dev/null @@ -1,864 +0,0 @@ -import contextlib -import gc -import os -import re -import requests -import gc -import sys - -from encodec import EncodecModel -import funcy -import logging -import numpy as np -from scipy.special import softmax -import torch -import torch.nn.functional as F -import tqdm -from transformers import BertTokenizer -from huggingface_hub import hf_hub_download, hf_hub_url - -from .model import GPTConfig, GPT -from .model_fine import FineGPT, FineGPTConfig -from .settings import initenv - -initenv(sys.argv) -global_force_cpu = os.environ.get("BARK_FORCE_CPU", False) -if ( - global_force_cpu != True and - torch.cuda.is_available() and - hasattr(torch.cuda, "amp") and - hasattr(torch.cuda.amp, "autocast") and - hasattr(torch.cuda, "is_bf16_supported") and - torch.cuda.is_bf16_supported() -): - autocast = funcy.partial(torch.cuda.amp.autocast, dtype=torch.bfloat16) -else: - @contextlib.contextmanager - def autocast(): - yield - - -# hold models in global scope to lazy load -global models -models = {} - -global models_devices -models_devices = {} - - -CONTEXT_WINDOW_SIZE 
= 1024 - -SEMANTIC_RATE_HZ = 49.9 -SEMANTIC_VOCAB_SIZE = 10_000 - -CODEBOOK_SIZE = 1024 -N_COARSE_CODEBOOKS = 2 -N_FINE_CODEBOOKS = 8 -COARSE_RATE_HZ = 75 - -SAMPLE_RATE = 24_000 - - -SUPPORTED_LANGS = [ - ("English", "en"), - ("German", "de"), - ("Spanish", "es"), - ("French", "fr"), - ("Hindi", "hi"), - ("Italian", "it"), - ("Japanese", "ja"), - ("Korean", "ko"), - ("Polish", "pl"), - ("Portuguese", "pt"), - ("Russian", "ru"), - ("Turkish", "tr"), - ("Chinese", "zh"), -] - -ALLOWED_PROMPTS = {"announcer"} -for _, lang in SUPPORTED_LANGS: - for prefix in ("", f"v2{os.path.sep}"): - for n in range(10): - ALLOWED_PROMPTS.add(f"{prefix}{lang}_speaker_{n}") - - -logger = logging.getLogger(__name__) - - -CUR_PATH = os.path.dirname(os.path.abspath(__file__)) - - -#default_cache_dir = os.path.join(os.path.expanduser("~"), ".cache") -#CACHE_DIR = os.path.join(os.getenv("XDG_CACHE_HOME", default_cache_dir), "suno", "bark_v0") -#CACHE_DIR = os.path.join(os.getcwd(), "models" -CACHE_DIR = "./models" - - -def _cast_bool_env_var(s): - return s.lower() in ('true', '1', 't') - -USE_SMALL_MODELS = _cast_bool_env_var(os.environ.get("SUNO_USE_SMALL_MODELS", "False")) -GLOBAL_ENABLE_MPS = _cast_bool_env_var(os.environ.get("SUNO_ENABLE_MPS", "False")) -OFFLOAD_CPU = _cast_bool_env_var(os.environ.get("SUNO_OFFLOAD_CPU", "False")) - -REMOTE_MODEL_PATHS = { - "text_small": { - "repo_id": "suno/bark", - "file_name": "text.pt", - }, - "coarse_small": { - "repo_id": "suno/bark", - "file_name": "coarse.pt", - }, - "fine_small": { - "repo_id": "suno/bark", - "file_name": "fine.pt", - }, - "text": { - "repo_id": "suno/bark", - "file_name": "text_2.pt", - }, - "coarse": { - "repo_id": "suno/bark", - "file_name": "coarse_2.pt", - }, - "fine": { - "repo_id": "suno/bark", - "file_name": "fine_2.pt", - }, -} - - -if not hasattr(torch.nn.functional, 'scaled_dot_product_attention') and torch.cuda.is_available(): - logger.warning( - "torch version does not support flash attention. You will get faster" + - " inference speed by upgrade torch to newest nightly version." - ) - - -def grab_best_device(use_gpu=True): - if torch.cuda.device_count() > 0 and use_gpu: - device = "cuda" - elif torch.backends.mps.is_available() and use_gpu and GLOBAL_ENABLE_MPS: - device = "mps" - else: - device = "cpu" - return device - - -def _get_ckpt_path(model_type, use_small=False): - key = model_type - if use_small or USE_SMALL_MODELS: - key += "_small" - return os.path.join(CACHE_DIR, REMOTE_MODEL_PATHS[key]["file_name"]) - -""" -def _download(from_hf_path, file_name, destfilename): - os.makedirs(CACHE_DIR, exist_ok=True) - hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR, local_dir_use_symlinks=False) - # Bug in original repo? Downloaded name differs from expected... 
- if not os.path.exists(destfilename): - localname = os.path.join(CACHE_DIR, file_name) - os.rename(localname, destfilename) -""" -def _download(from_hf_path, file_name): - os.makedirs(CACHE_DIR, exist_ok=True) - hf_hub_download(repo_id=from_hf_path, filename=file_name, local_dir=CACHE_DIR) - - -class InferenceContext: - def __init__(self, benchmark=False): - # we can't expect inputs to be the same length, so disable benchmarking by default - self._chosen_cudnn_benchmark = benchmark - self._cudnn_benchmark = None - - def __enter__(self): - self._cudnn_benchmark = torch.backends.cudnn.benchmark - torch.backends.cudnn.benchmark = self._chosen_cudnn_benchmark - - def __exit__(self, exc_type, exc_value, exc_traceback): - torch.backends.cudnn.benchmark = self._cudnn_benchmark - - -if torch.cuda.is_available(): - torch.backends.cuda.matmul.allow_tf32 = True - torch.backends.cudnn.allow_tf32 = True - - -@contextlib.contextmanager -def _inference_mode(): - with InferenceContext(), torch.inference_mode(), torch.no_grad(), autocast(): - yield - - -def _clear_cuda_cache(): - if torch.cuda.is_available(): - torch.cuda.empty_cache() - torch.cuda.synchronize() - - -def clean_models(model_key=None): - global models - model_keys = [model_key] if model_key is not None else models.keys() - for k in model_keys: - if k in models: - del models[k] - _clear_cuda_cache() - gc.collect() - - -def _load_model(ckpt_path, device, use_small=False, model_type="text"): - if model_type == "text": - ConfigClass = GPTConfig - ModelClass = GPT - elif model_type == "coarse": - ConfigClass = GPTConfig - ModelClass = GPT - elif model_type == "fine": - ConfigClass = FineGPTConfig - ModelClass = FineGPT - else: - raise NotImplementedError() - - # Force-remove Models to allow running on >12Gb GPU - # CF: Probably not needed anymore - #global models - #models.clear() - #gc.collect() - #torch.cuda.empty_cache() - # to here... - - model_key = f"{model_type}_small" if use_small or USE_SMALL_MODELS else model_type - model_info = REMOTE_MODEL_PATHS[model_key] - if not os.path.exists(ckpt_path): - logger.info(f"{model_type} model not found, downloading into `{CACHE_DIR}`.") - ## added next two lines to make it super clear which model is being downloaded - remote_filename = hf_hub_url(model_info["repo_id"], model_info["file_name"]) - print(f"Downloading {model_key} {model_info['repo_id']} remote model file {remote_filename} {model_info['file_name']} to {CACHE_DIR}") - _download(model_info["repo_id"], model_info["file_name"]) - # add next line to make it super clear which model is being loaded - print(f"Loading {model_key} model from {ckpt_path} to {device}") # added - checkpoint = torch.load(ckpt_path, map_location=device) - # this is a hack - model_args = checkpoint["model_args"] - if "input_vocab_size" not in model_args: - model_args["input_vocab_size"] = model_args["vocab_size"] - model_args["output_vocab_size"] = model_args["vocab_size"] - del model_args["vocab_size"] - gptconf = ConfigClass(**checkpoint["model_args"]) - model = ModelClass(gptconf) - state_dict = checkpoint["model"] - # fixup checkpoint - unwanted_prefix = "_orig_mod." 
- for k, v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix) :]] = state_dict.pop(k) - extra_keys = set(state_dict.keys()) - set(model.state_dict().keys()) - extra_keys = set([k for k in extra_keys if not k.endswith(".attn.bias")]) - missing_keys = set(model.state_dict().keys()) - set(state_dict.keys()) - missing_keys = set([k for k in missing_keys if not k.endswith(".attn.bias")]) - if len(extra_keys) != 0: - raise ValueError(f"extra keys found: {extra_keys}") - if len(missing_keys) != 0: - raise ValueError(f"missing keys: {missing_keys}") - model.load_state_dict(state_dict, strict=False) - n_params = model.get_num_params() - val_loss = checkpoint["best_val_loss"].item() - logger.info(f"model loaded: {round(n_params/1e6,1)}M params, {round(val_loss,3)} loss") - model.eval() - model.to(device) - del checkpoint, state_dict - _clear_cuda_cache() - if model_type == "text": - tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased") - return { - "model": model, - "tokenizer": tokenizer, - } - return model - - -def _load_codec_model(device): - model = EncodecModel.encodec_model_24khz() - model.set_target_bandwidth(6.0) - model.eval() - model.to(device) - _clear_cuda_cache() - return model - - -def load_model(use_gpu=True, use_small=False, force_reload=False, model_type="text"): - _load_model_f = funcy.partial(_load_model, model_type=model_type, use_small=use_small) - if model_type not in ("text", "coarse", "fine"): - raise NotImplementedError() - global models - global models_devices - device = grab_best_device(use_gpu=use_gpu) - model_key = f"{model_type}" - if OFFLOAD_CPU: - models_devices[model_key] = device - device = "cpu" - if model_key not in models or force_reload: - ckpt_path = _get_ckpt_path(model_type, use_small=use_small) - clean_models(model_key=model_key) - model = _load_model_f(ckpt_path, device) - models[model_key] = model - if model_type == "text": - models[model_key]["model"].to(device) - else: - models[model_key].to(device) - return models[model_key] - - -def load_codec_model(use_gpu=True, force_reload=False): - global models - global models_devices - device = grab_best_device(use_gpu=use_gpu) - if device == "mps": - # encodec doesn't support mps - device = "cpu" - model_key = "codec" - if OFFLOAD_CPU: - models_devices[model_key] = device - device = "cpu" - if model_key not in models or force_reload: - clean_models(model_key=model_key) - model = _load_codec_model(device) - models[model_key] = model - models[model_key].to(device) - return models[model_key] - - -def preload_models( - text_use_gpu=True, - text_use_small=False, - coarse_use_gpu=True, - coarse_use_small=False, - fine_use_gpu=True, - fine_use_small=False, - codec_use_gpu=True, - force_reload=False -): - """Load all the necessary models for the pipeline.""" - if grab_best_device() == "cpu" and ( - text_use_gpu or coarse_use_gpu or fine_use_gpu or codec_use_gpu - ): - logger.warning("No GPU being used. 
Careful, inference might be very slow!") - _ = load_model( - model_type="text", use_gpu=text_use_gpu, use_small=text_use_small, force_reload=force_reload - ) - _ = load_model( - model_type="coarse", - use_gpu=coarse_use_gpu, - use_small=coarse_use_small, - force_reload=force_reload, - ) - _ = load_model( - model_type="fine", use_gpu=fine_use_gpu, use_small=fine_use_small, force_reload=force_reload - ) - _ = load_codec_model(use_gpu=codec_use_gpu, force_reload=force_reload) - - -#### -# Generation Functionality -#### - - -def _tokenize(tokenizer, text): - return tokenizer.encode(text, add_special_tokens=False) - - -def _detokenize(tokenizer, enc_text): - return tokenizer.decode(enc_text) - - -def _normalize_whitespace(text): - return re.sub(r"\s+", " ", text).strip() - - -TEXT_ENCODING_OFFSET = 10_048 -SEMANTIC_PAD_TOKEN = 10_000 -TEXT_PAD_TOKEN = 129_595 -SEMANTIC_INFER_TOKEN = 129_599 - - -def _load_history_prompt(history_prompt_input): - if isinstance(history_prompt_input, str) and history_prompt_input.endswith(".npz"): - history_prompt = np.load(history_prompt_input) - elif isinstance(history_prompt_input, str): - # make sure this works on non-ubuntu - history_prompt_input = os.path.join(*history_prompt_input.split("/")) -# if history_prompt_input not in ALLOWED_PROMPTS: -# raise ValueError("history prompt not found") - history_prompt = np.load( - os.path.join(CUR_PATH, "assets", "prompts", f"{history_prompt_input}.npz") - ) - elif isinstance(history_prompt_input, dict): - assert("semantic_prompt" in history_prompt_input) - assert("coarse_prompt" in history_prompt_input) - assert("fine_prompt" in history_prompt_input) - history_prompt = history_prompt_input - else: - raise ValueError("history prompt format unrecognized") - return history_prompt - - -def generate_text_semantic( - text, - history_prompt=None, - temp=0.7, - top_k=None, - top_p=None, - silent=False, - min_eos_p=0.2, - max_gen_duration_s=None, - allow_early_stop=True, - use_kv_caching=False, -): - """Generate semantic tokens from text.""" - assert isinstance(text, str) - text = _normalize_whitespace(text) - assert len(text.strip()) > 0 - if history_prompt is not None: - history_prompt = _load_history_prompt(history_prompt) - semantic_history = history_prompt["semantic_prompt"] - assert ( - isinstance(semantic_history, np.ndarray) - and len(semantic_history.shape) == 1 - and len(semantic_history) > 0 - and semantic_history.min() >= 0 - and semantic_history.max() <= SEMANTIC_VOCAB_SIZE - 1 - ) - else: - semantic_history = None - # load models if not yet exist - global models - global models_devices - if "text" not in models: - preload_models() - model_container = models["text"] - model = model_container["model"] - tokenizer = model_container["tokenizer"] - encoded_text = np.array(_tokenize(tokenizer, text)) + TEXT_ENCODING_OFFSET - if OFFLOAD_CPU: - model.to(models_devices["text"]) - device = next(model.parameters()).device - if len(encoded_text) > 256: - p = round((len(encoded_text) - 256) / len(encoded_text) * 100, 1) - logger.warning(f"warning, text too long, lopping of last {p}%") - encoded_text = encoded_text[:256] - encoded_text = np.pad( - encoded_text, - (0, 256 - len(encoded_text)), - constant_values=TEXT_PAD_TOKEN, - mode="constant", - ) - if semantic_history is not None: - semantic_history = semantic_history.astype(np.int64) - # lop off if history is too long, pad if needed - semantic_history = semantic_history[-256:] - semantic_history = np.pad( - semantic_history, - (0, 256 - len(semantic_history)), - 
constant_values=SEMANTIC_PAD_TOKEN, - mode="constant", - ) - else: - semantic_history = np.array([SEMANTIC_PAD_TOKEN] * 256) - x = torch.from_numpy( - np.hstack([ - encoded_text, semantic_history, np.array([SEMANTIC_INFER_TOKEN]) - ]).astype(np.int64) - )[None] - assert x.shape[1] == 256 + 256 + 1 - with _inference_mode(): - x = x.to(device) - n_tot_steps = 768 - # custom tqdm updates since we don't know when eos will occur - pbar = tqdm.tqdm(disable=silent, total=100) - pbar_state = 0 - tot_generated_duration_s = 0 - kv_cache = None - for n in range(n_tot_steps): - if use_kv_caching and kv_cache is not None: - x_input = x[:, [-1]] - else: - x_input = x - logits, kv_cache = model( - x_input, merge_context=True, use_cache=use_kv_caching, past_kv=kv_cache - ) - relevant_logits = logits[0, 0, :SEMANTIC_VOCAB_SIZE] - if allow_early_stop: - relevant_logits = torch.hstack( - (relevant_logits, logits[0, 0, [SEMANTIC_PAD_TOKEN]]) # eos - ) - if top_p is not None: - # faster to convert to numpy - original_device = relevant_logits.device - relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy() - sorted_indices = np.argsort(relevant_logits)[::-1] - sorted_logits = relevant_logits[sorted_indices] - cumulative_probs = np.cumsum(softmax(sorted_logits)) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy() - sorted_indices_to_remove[0] = False - relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf - relevant_logits = torch.from_numpy(relevant_logits) - relevant_logits = relevant_logits.to(original_device) - if top_k is not None: - v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1))) - relevant_logits[relevant_logits < v[-1]] = -float("Inf") - probs = F.softmax(relevant_logits / temp, dim=-1) - # multinomial bugged on mps: shuttle to cpu if necessary - inf_device = probs.device - if probs.device.type == "mps": - probs = probs.to("cpu") - item_next = torch.multinomial(probs, num_samples=1) - probs = probs.to(inf_device) - item_next = item_next.to(inf_device) - if allow_early_stop and ( - item_next == SEMANTIC_VOCAB_SIZE - or (min_eos_p is not None and probs[-1] >= min_eos_p) - ): - # eos found, so break - pbar.update(100 - pbar_state) - break - x = torch.cat((x, item_next[None]), dim=1) - tot_generated_duration_s += 1 / SEMANTIC_RATE_HZ - if max_gen_duration_s is not None and tot_generated_duration_s > max_gen_duration_s: - pbar.update(100 - pbar_state) - break - if n == n_tot_steps - 1: - pbar.update(100 - pbar_state) - break - del logits, relevant_logits, probs, item_next - req_pbar_state = np.min([100, int(round(100 * n / n_tot_steps))]) - if req_pbar_state > pbar_state: - pbar.update(req_pbar_state - pbar_state) - pbar_state = req_pbar_state - pbar.close() - out = x.detach().cpu().numpy().squeeze()[256 + 256 + 1 :] - if OFFLOAD_CPU: - model.to("cpu") - assert all(0 <= out) and all(out < SEMANTIC_VOCAB_SIZE) - _clear_cuda_cache() - return out - - -def _flatten_codebooks(arr, offset_size=CODEBOOK_SIZE): - assert len(arr.shape) == 2 - arr = arr.copy() - if offset_size is not None: - for n in range(1, arr.shape[0]): - arr[n, :] += offset_size * n - flat_arr = arr.ravel("F") - return flat_arr - - -COARSE_SEMANTIC_PAD_TOKEN = 12_048 -COARSE_INFER_TOKEN = 12_050 - - -def generate_coarse( - x_semantic, - history_prompt=None, - temp=0.7, - top_k=None, - top_p=None, - silent=False, - max_coarse_history=630, # min 60 (faster), max 630 (more context) - sliding_window_len=60, - 
use_kv_caching=False, -): - """Generate coarse audio codes from semantic tokens.""" -# CF: Uncommented because it breaks swap voice more than once -# assert ( -# isinstance(x_semantic, np.ndarray) -# and len(x_semantic.shape) == 1 -# and len(x_semantic) > 0 -# and x_semantic.min() >= 0 -# and x_semantic.max() <= SEMANTIC_VOCAB_SIZE - 1 -# ) - assert 60 <= max_coarse_history <= 630 - assert max_coarse_history + sliding_window_len <= 1024 - 256 - semantic_to_coarse_ratio = COARSE_RATE_HZ / SEMANTIC_RATE_HZ * N_COARSE_CODEBOOKS - max_semantic_history = int(np.floor(max_coarse_history / semantic_to_coarse_ratio)) - if history_prompt is not None: - history_prompt = _load_history_prompt(history_prompt) - x_semantic_history = history_prompt["semantic_prompt"] - x_coarse_history = history_prompt["coarse_prompt"] - assert ( - isinstance(x_semantic_history, np.ndarray) - and len(x_semantic_history.shape) == 1 - and len(x_semantic_history) > 0 - and x_semantic_history.min() >= 0 - and x_semantic_history.max() <= SEMANTIC_VOCAB_SIZE - 1 - and isinstance(x_coarse_history, np.ndarray) - and len(x_coarse_history.shape) == 2 - and x_coarse_history.shape[0] == N_COARSE_CODEBOOKS - and x_coarse_history.shape[-1] >= 0 - and x_coarse_history.min() >= 0 - and x_coarse_history.max() <= CODEBOOK_SIZE - 1 - #and ( - # round(x_coarse_history.shape[-1] / len(x_semantic_history), 1) - # == round(semantic_to_coarse_ratio / N_COARSE_CODEBOOKS, 1) - #) - ) - x_coarse_history = _flatten_codebooks(x_coarse_history) + SEMANTIC_VOCAB_SIZE - # trim histories correctly - n_semantic_hist_provided = np.min( - [ - max_semantic_history, - len(x_semantic_history) - len(x_semantic_history) % 2, - int(np.floor(len(x_coarse_history) / semantic_to_coarse_ratio)), - ] - ) - n_coarse_hist_provided = int(round(n_semantic_hist_provided * semantic_to_coarse_ratio)) - x_semantic_history = x_semantic_history[-n_semantic_hist_provided:].astype(np.int32) - x_coarse_history = x_coarse_history[-n_coarse_hist_provided:].astype(np.int32) - # TODO: bit of a hack for time alignment (sounds better) - x_coarse_history = x_coarse_history[:-2] - else: - x_semantic_history = np.array([], dtype=np.int32) - x_coarse_history = np.array([], dtype=np.int32) - # load models if not yet exist - global models - global models_devices - if "coarse" not in models: - preload_models() - model = models["coarse"] - if OFFLOAD_CPU: - model.to(models_devices["coarse"]) - device = next(model.parameters()).device - # start loop - n_steps = int( - round( - np.floor(len(x_semantic) * semantic_to_coarse_ratio / N_COARSE_CODEBOOKS) - * N_COARSE_CODEBOOKS - ) - ) - assert n_steps > 0 and n_steps % N_COARSE_CODEBOOKS == 0 - x_semantic = np.hstack([x_semantic_history, x_semantic]).astype(np.int32) - x_coarse = x_coarse_history.astype(np.int32) - base_semantic_idx = len(x_semantic_history) - with _inference_mode(): - x_semantic_in = torch.from_numpy(x_semantic)[None].to(device) - x_coarse_in = torch.from_numpy(x_coarse)[None].to(device) - n_window_steps = int(np.ceil(n_steps / sliding_window_len)) - n_step = 0 - for _ in tqdm.tqdm(range(n_window_steps), total=n_window_steps, disable=silent): - semantic_idx = base_semantic_idx + int(round(n_step / semantic_to_coarse_ratio)) - # pad from right side - x_in = x_semantic_in[:, np.max([0, semantic_idx - max_semantic_history]) :] - x_in = x_in[:, :256] - x_in = F.pad( - x_in, - (0, 256 - x_in.shape[-1]), - "constant", - COARSE_SEMANTIC_PAD_TOKEN, - ) - x_in = torch.hstack( - [ - x_in, - 
torch.tensor([COARSE_INFER_TOKEN])[None].to(device), - x_coarse_in[:, -max_coarse_history:], - ] - ) - kv_cache = None - for _ in range(sliding_window_len): - if n_step >= n_steps: - continue - is_major_step = n_step % N_COARSE_CODEBOOKS == 0 - - if use_kv_caching and kv_cache is not None: - x_input = x_in[:, [-1]] - else: - x_input = x_in - - logits, kv_cache = model(x_input, use_cache=use_kv_caching, past_kv=kv_cache) - logit_start_idx = ( - SEMANTIC_VOCAB_SIZE + (1 - int(is_major_step)) * CODEBOOK_SIZE - ) - logit_end_idx = ( - SEMANTIC_VOCAB_SIZE + (2 - int(is_major_step)) * CODEBOOK_SIZE - ) - relevant_logits = logits[0, 0, logit_start_idx:logit_end_idx] - if top_p is not None: - # faster to convert to numpy - original_device = relevant_logits.device - relevant_logits = relevant_logits.detach().cpu().type(torch.float32).numpy() - sorted_indices = np.argsort(relevant_logits)[::-1] - sorted_logits = relevant_logits[sorted_indices] - cumulative_probs = np.cumsum(softmax(sorted_logits)) - sorted_indices_to_remove = cumulative_probs > top_p - sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].copy() - sorted_indices_to_remove[0] = False - relevant_logits[sorted_indices[sorted_indices_to_remove]] = -np.inf - relevant_logits = torch.from_numpy(relevant_logits) - relevant_logits = relevant_logits.to(original_device) - if top_k is not None: - v, _ = torch.topk(relevant_logits, min(top_k, relevant_logits.size(-1))) - relevant_logits[relevant_logits < v[-1]] = -float("Inf") - probs = F.softmax(relevant_logits / temp, dim=-1) - # multinomial bugged on mps: shuttle to cpu if necessary - inf_device = probs.device - if probs.device.type == "mps": - probs = probs.to("cpu") - item_next = torch.multinomial(probs, num_samples=1) - probs = probs.to(inf_device) - item_next = item_next.to(inf_device) - item_next += logit_start_idx - x_coarse_in = torch.cat((x_coarse_in, item_next[None]), dim=1) - x_in = torch.cat((x_in, item_next[None]), dim=1) - del logits, relevant_logits, probs, item_next - n_step += 1 - del x_in - del x_semantic_in - if OFFLOAD_CPU: - model.to("cpu") - gen_coarse_arr = x_coarse_in.detach().cpu().numpy().squeeze()[len(x_coarse_history) :] - del x_coarse_in - assert len(gen_coarse_arr) == n_steps - gen_coarse_audio_arr = gen_coarse_arr.reshape(-1, N_COARSE_CODEBOOKS).T - SEMANTIC_VOCAB_SIZE - for n in range(1, N_COARSE_CODEBOOKS): - gen_coarse_audio_arr[n, :] -= n * CODEBOOK_SIZE - _clear_cuda_cache() - return gen_coarse_audio_arr - - -def generate_fine( - x_coarse_gen, - history_prompt=None, - temp=0.5, - silent=True, -): - """Generate full audio codes from coarse audio codes.""" - assert ( - isinstance(x_coarse_gen, np.ndarray) - and len(x_coarse_gen.shape) == 2 - and 1 <= x_coarse_gen.shape[0] <= N_FINE_CODEBOOKS - 1 - and x_coarse_gen.shape[1] > 0 - and x_coarse_gen.min() >= 0 - and x_coarse_gen.max() <= CODEBOOK_SIZE - 1 - ) - if history_prompt is not None: - history_prompt = _load_history_prompt(history_prompt) - x_fine_history = history_prompt["fine_prompt"] - assert ( - isinstance(x_fine_history, np.ndarray) - and len(x_fine_history.shape) == 2 - and x_fine_history.shape[0] == N_FINE_CODEBOOKS - and x_fine_history.shape[1] >= 0 - and x_fine_history.min() >= 0 - and x_fine_history.max() <= CODEBOOK_SIZE - 1 - ) - else: - x_fine_history = None - n_coarse = x_coarse_gen.shape[0] - # load models if not yet exist - global models - global models_devices - if "fine" not in models: - preload_models() - model = models["fine"] - if OFFLOAD_CPU: - model.to(models_devices["fine"]) 
- device = next(model.parameters()).device - # make input arr - in_arr = np.vstack( - [ - x_coarse_gen, - np.zeros((N_FINE_CODEBOOKS - n_coarse, x_coarse_gen.shape[1])) - + CODEBOOK_SIZE, # padding - ] - ).astype(np.int32) - # prepend history if available (max 512) - if x_fine_history is not None: - x_fine_history = x_fine_history.astype(np.int32) - in_arr = np.hstack( - [ - x_fine_history[:, -512:].astype(np.int32), - in_arr, - ] - ) - n_history = x_fine_history[:, -512:].shape[1] - else: - n_history = 0 - n_remove_from_end = 0 - # need to pad if too short (since non-causal model) - if in_arr.shape[1] < 1024: - n_remove_from_end = 1024 - in_arr.shape[1] - in_arr = np.hstack( - [ - in_arr, - np.zeros((N_FINE_CODEBOOKS, n_remove_from_end), dtype=np.int32) + CODEBOOK_SIZE, - ] - ) - # we can be lazy about fractional loop and just keep overwriting codebooks - n_loops = np.max([0, int(np.ceil((x_coarse_gen.shape[1] - (1024 - n_history)) / 512))]) + 1 - with _inference_mode(): - in_arr = torch.tensor(in_arr.T).to(device) - for n in tqdm.tqdm(range(n_loops), disable=silent): - start_idx = np.min([n * 512, in_arr.shape[0] - 1024]) - start_fill_idx = np.min([n_history + n * 512, in_arr.shape[0] - 512]) - rel_start_fill_idx = start_fill_idx - start_idx - in_buffer = in_arr[start_idx : start_idx + 1024, :][None] - for nn in range(n_coarse, N_FINE_CODEBOOKS): - logits = model(nn, in_buffer) - if temp is None: - relevant_logits = logits[0, rel_start_fill_idx:, :CODEBOOK_SIZE] - codebook_preds = torch.argmax(relevant_logits, -1) - else: - relevant_logits = logits[0, :, :CODEBOOK_SIZE] / temp - probs = F.softmax(relevant_logits, dim=-1) - # multinomial bugged on mps: shuttle to cpu if necessary - inf_device = probs.device - if probs.device.type == "mps": - probs = probs.to("cpu") - codebook_preds = torch.hstack( - [ - torch.multinomial(probs[nnn], num_samples=1).to(inf_device) - for nnn in range(rel_start_fill_idx, 1024) - ] - ) - in_buffer[0, rel_start_fill_idx:, nn] = codebook_preds - del logits, codebook_preds - # transfer over info into model_in and convert to numpy - for nn in range(n_coarse, N_FINE_CODEBOOKS): - in_arr[ - start_fill_idx : start_fill_idx + (1024 - rel_start_fill_idx), nn - ] = in_buffer[0, rel_start_fill_idx:, nn] - del in_buffer - gen_fine_arr = in_arr.detach().cpu().numpy().squeeze().T - del in_arr - if OFFLOAD_CPU: - model.to("cpu") - gen_fine_arr = gen_fine_arr[:, n_history:] - if n_remove_from_end > 0: - gen_fine_arr = gen_fine_arr[:, :-n_remove_from_end] - assert gen_fine_arr.shape[-1] == x_coarse_gen.shape[-1] - _clear_cuda_cache() - return gen_fine_arr - - -def codec_decode(fine_tokens): - """Turn quantized audio codes into audio array using encodec.""" - # load models if not yet exist - global models - global models_devices - if "codec" not in models: - preload_models() - model = models["codec"] - if OFFLOAD_CPU: - model.to(models_devices["codec"]) - device = next(model.parameters()).device - arr = torch.from_numpy(fine_tokens)[None] - arr = arr.to(device) - arr = arr.transpose(0, 1) - emb = model.quantizer.decode(arr) - out = model.decoder(emb) - audio_arr = out.detach().cpu().numpy().squeeze() - del arr, emb, out - if OFFLOAD_CPU: - model.to("cpu") - return audio_arr diff --git a/spaces/kevinwang676/FreeVC/speaker_encoder/inference.py b/spaces/kevinwang676/FreeVC/speaker_encoder/inference.py deleted file mode 100644 index 15e6bf16ba9e551473cd6b179bb518f0704ac33d..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/FreeVC/speaker_encoder/inference.py +++ 
/dev/null @@ -1,177 +0,0 @@ -from speaker_encoder.params_data import * -from speaker_encoder.model import SpeakerEncoder -from speaker_encoder.audio import preprocess_wav # We want to expose this function from here -from matplotlib import cm -from speaker_encoder import audio -from pathlib import Path -import matplotlib.pyplot as plt -import numpy as np -import torch - -_model = None # type: SpeakerEncoder -_device = None # type: torch.device - - -def load_model(weights_fpath: Path, device=None): - """ - Loads the model in memory. If this function is not explicitely called, it will be run on the - first call to embed_frames() with the default weights file. - - :param weights_fpath: the path to saved model weights. - :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda"). The - model will be loaded and will run on this device. Outputs will however always be on the cpu. - If None, will default to your GPU if it"s available, otherwise your CPU. - """ - # TODO: I think the slow loading of the encoder might have something to do with the device it - # was saved on. Worth investigating. - global _model, _device - if device is None: - _device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - elif isinstance(device, str): - _device = torch.device(device) - _model = SpeakerEncoder(_device, torch.device("cpu")) - checkpoint = torch.load(weights_fpath) - _model.load_state_dict(checkpoint["model_state"]) - _model.eval() - print("Loaded encoder \"%s\" trained to step %d" % (weights_fpath.name, checkpoint["step"])) - - -def is_loaded(): - return _model is not None - - -def embed_frames_batch(frames_batch): - """ - Computes embeddings for a batch of mel spectrogram. - - :param frames_batch: a batch mel of spectrogram as a numpy array of float32 of shape - (batch_size, n_frames, n_channels) - :return: the embeddings as a numpy array of float32 of shape (batch_size, model_embedding_size) - """ - if _model is None: - raise Exception("Model was not loaded. Call load_model() before inference.") - - frames = torch.from_numpy(frames_batch).to(_device) - embed = _model.forward(frames).detach().cpu().numpy() - return embed - - -def compute_partial_slices(n_samples, partial_utterance_n_frames=partials_n_frames, - min_pad_coverage=0.75, overlap=0.5): - """ - Computes where to split an utterance waveform and its corresponding mel spectrogram to obtain - partial utterances of each. Both the waveform and the mel - spectrogram slices are returned, so as to make each partial utterance waveform correspond to - its spectrogram. This function assumes that the mel spectrogram parameters used are those - defined in params_data.py. - - The returned ranges may be indexing further than the length of the waveform. It is - recommended that you pad the waveform with zeros up to wave_slices[-1].stop. - - :param n_samples: the number of samples in the waveform - :param partial_utterance_n_frames: the number of mel spectrogram frames in each partial - utterance - :param min_pad_coverage: when reaching the last partial utterance, it may or may not have - enough frames. If at least of are present, - then the last partial utterance will be considered, as if we padded the audio. Otherwise, - it will be discarded, as if we trimmed the audio. If there aren't enough frames for 1 partial - utterance, this parameter is ignored so that the function always returns at least 1 slice. - :param overlap: by how much the partial utterance should overlap. 
If set to 0, the partial - utterances are entirely disjoint. - :return: the waveform slices and mel spectrogram slices as lists of array slices. Index - respectively the waveform and the mel spectrogram with these slices to obtain the partial - utterances. - """ - assert 0 <= overlap < 1 - assert 0 < min_pad_coverage <= 1 - - samples_per_frame = int((sampling_rate * mel_window_step / 1000)) - n_frames = int(np.ceil((n_samples + 1) / samples_per_frame)) - frame_step = max(int(np.round(partial_utterance_n_frames * (1 - overlap))), 1) - - # Compute the slices - wav_slices, mel_slices = [], [] - steps = max(1, n_frames - partial_utterance_n_frames + frame_step + 1) - for i in range(0, steps, frame_step): - mel_range = np.array([i, i + partial_utterance_n_frames]) - wav_range = mel_range * samples_per_frame - mel_slices.append(slice(*mel_range)) - wav_slices.append(slice(*wav_range)) - - # Evaluate whether extra padding is warranted or not - last_wav_range = wav_slices[-1] - coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start) - if coverage < min_pad_coverage and len(mel_slices) > 1: - mel_slices = mel_slices[:-1] - wav_slices = wav_slices[:-1] - - return wav_slices, mel_slices - - -def embed_utterance(wav, using_partials=True, return_partials=False, **kwargs): - """ - Computes an embedding for a single utterance. - - # TODO: handle multiple wavs to benefit from batching on GPU - :param wav: a preprocessed (see audio.py) utterance waveform as a numpy array of float32 - :param using_partials: if True, then the utterance is split in partial utterances of - frames and the utterance embedding is computed from their - normalized average. If False, the utterance is instead computed from feeding the entire - spectogram to the network. - :param return_partials: if True, the partial embeddings will also be returned along with the - wav slices that correspond to the partial embeddings. - :param kwargs: additional arguments to compute_partial_splits() - :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If - is True, the partial utterances as a numpy array of float32 of shape - (n_partials, model_embedding_size) and the wav partials as a list of slices will also be - returned. If is simultaneously set to False, both these values will be None - instead. 
- """ - # Process the entire utterance if not using partials - if not using_partials: - frames = audio.wav_to_mel_spectrogram(wav) - embed = embed_frames_batch(frames[None, ...])[0] - if return_partials: - return embed, None, None - return embed - - # Compute where to split the utterance into partials and pad if necessary - wave_slices, mel_slices = compute_partial_slices(len(wav), **kwargs) - max_wave_length = wave_slices[-1].stop - if max_wave_length >= len(wav): - wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant") - - # Split the utterance into partials - frames = audio.wav_to_mel_spectrogram(wav) - frames_batch = np.array([frames[s] for s in mel_slices]) - partial_embeds = embed_frames_batch(frames_batch) - - # Compute the utterance embedding from the partial embeddings - raw_embed = np.mean(partial_embeds, axis=0) - embed = raw_embed / np.linalg.norm(raw_embed, 2) - - if return_partials: - return embed, partial_embeds, wave_slices - return embed - - -def embed_speaker(wavs, **kwargs): - raise NotImplemented() - - -def plot_embedding_as_heatmap(embed, ax=None, title="", shape=None, color_range=(0, 0.30)): - if ax is None: - ax = plt.gca() - - if shape is None: - height = int(np.sqrt(len(embed))) - shape = (height, -1) - embed = embed.reshape(shape) - - cmap = cm.get_cmap() - mappable = ax.imshow(embed, cmap=cmap) - cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04) - cbar.set_clim(*color_range) - - ax.set_xticks([]), ax.set_yticks([]) - ax.set_title(title) diff --git a/spaces/king007/OCR-Invoice-LayoutLMv3/README.md b/spaces/king007/OCR-Invoice-LayoutLMv3/README.md deleted file mode 100644 index 7047bfcd87a6b4aecc6bff4f29f73df26c8d12c5..0000000000000000000000000000000000000000 --- a/spaces/king007/OCR-Invoice-LayoutLMv3/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OCR Invoice LayoutLMv3 -emoji: 🏢 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: jinhybr/OCR-Invoice-LayoutLMv3 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/hparams.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/hparams.py deleted file mode 100644 index 8bcdb635a90a7700d4e133410268a897d3fd4a8c..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/hparams.py +++ /dev/null @@ -1,110 +0,0 @@ -import ast -import pprint -import json - -class HParams(object): - def __init__(self, **kwargs): self.__dict__.update(kwargs) - def __setitem__(self, key, value): setattr(self, key, value) - def __getitem__(self, key): return getattr(self, key) - def __repr__(self): return pprint.pformat(self.__dict__) - - def parse(self, string): - # Overrides hparams from a comma-separated string of name=value pairs - if len(string) > 0: - overrides = [s.split("=") for s in string.split(",")] - keys, values = zip(*overrides) - keys = list(map(str.strip, keys)) - values = list(map(str.strip, values)) - for k in keys: - self.__dict__[k] = ast.literal_eval(values[keys.index(k)]) - return self - - def loadJson(self, dict): - print("\Loading the json with %s\n", dict) - for k in dict.keys(): - if k not in ["tts_schedule", "tts_finetune_layers"]: - self.__dict__[k] = dict[k] - return self - - def dumpJson(self, fp): - print("\Saving the json with %s\n", fp) - with fp.open("w", encoding="utf-8") as f: - json.dump(self.__dict__, f) - return self - -hparams = 
HParams( - ### Signal Processing (used in both synthesizer and vocoder) - sample_rate = 16000, - n_fft = 800, - num_mels = 80, - hop_size = 200, # Tacotron uses 12.5 ms frame shift (set to sample_rate * 0.0125) - win_size = 800, # Tacotron uses 50 ms frame length (set to sample_rate * 0.050) - fmin = 55, - min_level_db = -100, - ref_level_db = 20, - max_abs_value = 4., # Gradient explodes if too big, premature convergence if too small. - preemphasis = 0.97, # Filter coefficient to use if preemphasize is True - preemphasize = True, - - ### Tacotron Text-to-Speech (TTS) - tts_embed_dims = 512, # Embedding dimension for the graphemes/phoneme inputs - tts_encoder_dims = 256, - tts_decoder_dims = 128, - tts_postnet_dims = 512, - tts_encoder_K = 5, - tts_lstm_dims = 1024, - tts_postnet_K = 5, - tts_num_highways = 4, - tts_dropout = 0.5, - tts_cleaner_names = ["basic_cleaners"], - tts_stop_threshold = -3.4, # Value below which audio generation ends. - # For example, for a range of [-4, 4], this - # will terminate the sequence at the first - # frame that has all values < -3.4 - - ### Tacotron Training - tts_schedule = [(2, 1e-3, 10_000, 12), # Progressive training schedule - (2, 5e-4, 15_000, 12), # (r, lr, step, batch_size) - (2, 2e-4, 20_000, 12), # (r, lr, step, batch_size) - (2, 1e-4, 30_000, 12), # - (2, 5e-5, 40_000, 12), # - (2, 1e-5, 60_000, 12), # - (2, 5e-6, 160_000, 12), # r = reduction factor (# of mel frames - (2, 3e-6, 320_000, 12), # synthesized for each decoder iteration) - (2, 1e-6, 640_000, 12)], # lr = learning rate - - tts_clip_grad_norm = 1.0, # clips the gradient norm to prevent explosion - set to None if not needed - tts_eval_interval = 500, # Number of steps between model evaluation (sample generation) - # Set to -1 to generate after completing epoch, or 0 to disable - tts_eval_num_samples = 1, # Makes this number of samples - - ## For finetune usage, if set, only selected layers will be trained, available: encoder,encoder_proj,gst,decoder,postnet,post_proj - tts_finetune_layers = [], - - ### Data Preprocessing - max_mel_frames = 900, - rescale = True, - rescaling_max = 0.9, - synthesis_batch_size = 16, # For vocoder preprocessing and inference. 
- - ### Mel Visualization and Griffin-Lim - signal_normalization = True, - power = 1.5, - griffin_lim_iters = 60, - - ### Audio processing options - fmax = 7600, # Should not exceed (sample_rate // 2) - allow_clipping_in_normalization = True, # Used when signal_normalization = True - clip_mels_length = True, # If true, discards samples exceeding max_mel_frames - use_lws = False, # "Fast spectrogram phase recovery using local weighted sums" - symmetric_mels = True, # Sets mel range to [-max_abs_value, max_abs_value] if True, - # and [0, max_abs_value] if False - trim_silence = True, # Use with sample_rate of 16000 for best results - - ### SV2TTS - speaker_embedding_size = 256, # Dimension for the speaker embedding - silence_min_duration_split = 0.4, # Duration in seconds of a silence for an utterance to be split - utterance_min_duration = 1.6, # Duration in seconds below which utterances are discarded - use_gst = True, # Whether to use global style token - use_ser_for_gst = True, # Whether to use speaker embedding referenced for global style token - ) diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/pointer_generator/README.xsum.md b/spaces/koajoel/PolyFormer/fairseq/examples/pointer_generator/README.xsum.md deleted file mode 100644 index ac3a8c3ddc96cd9810b45d49f6b361e43de1e9fb..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/pointer_generator/README.xsum.md +++ /dev/null @@ -1,180 +0,0 @@ -## Training a pointer-generator model on the Extreme Summarization dataset - -##### 1. Download the Extreme Summarization data and preprocess it - -Follow the instructions [here](https://github.com/EdinburghNLP/XSum) to obtain -the original Extreme Summarization dataset. You should have six files, -{train,validation,test}.{document,summary}. - -##### 2. Create a vocabulary and extend it with source position markers - -```bash -vocab_size=10000 -position_markers=1000 -export LC_ALL=C -cat train.document train.summary | - tr -s '[:space:]' '\n' | - sort | - uniq -c | - sort -k1,1bnr -k2 | - head -n "$((vocab_size - 4))" | - awk '{ print $2 " " $1 }' >dict.pg.txt -python3 -c "[print(' 0'.format(n)) for n in range($position_markers)]" >>dict.pg.txt -``` - -This creates the file dict.pg.txt that contains the 10k most frequent words, -followed by 1k source position markers: - -``` -the 4954867 -. 4157552 -, 3439668 -to 2212159 -a 1916857 -of 1916820 -and 1823350 -... - 0 - 0 - 0 - 0 - 0 -... -``` - -##### 2. Preprocess the text data - -```bash -./preprocess.py --source train.document --target train.summary --vocab <(cut -d' ' -f1 dict.pg.txt) --source-out train.pg.src --target-out train.pg.tgt -./preprocess.py --source validation.document --target validation.summary --vocab <(cut -d' ' -f1 dict.pg.txt) --source-out valid.pg.src --target-out valid.pg.tgt -./preprocess.py --source test.document --vocab <(cut -d' ' -f1 dict.pg.txt) --source-out test.pg.src -``` - -The data should now contain `` tokens in place of out-of-vocabulary words. - -##### 3. Binarize the dataset: - -```bash -fairseq-preprocess \ - --source-lang src \ - --target-lang tgt \ - --trainpref train.pg \ - --validpref valid.pg \ - --destdir bin \ - --workers 60 \ - --srcdict dict.pg.txt \ - --joined-dictionary -``` - -##### 3. 
Train a model - -```bash -total_updates=20000 -warmup_updates=500 -lr=0.001 -max_tokens=4096 -update_freq=4 -pointer_layer=-2 - -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train bin \ - --user-dir examples/pointer_generator/pointer_generator_src \ - --max-tokens "$max_tokens" \ - --task translation \ - --source-lang src --target-lang tgt \ - --truncate-source \ - --layernorm-embedding \ - --share-all-embeddings \ - --encoder-normalize-before \ - --decoder-normalize-before \ - --required-batch-size-multiple 1 \ - --arch transformer_pointer_generator \ - --alignment-layer "$pointer_layer" \ - --alignment-heads 1 \ - --source-position-markers 1000 \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \ - --clip-norm 0.1 \ - --lr-scheduler inverse_sqrt --lr "$lr" --max-update "$total_updates" --warmup-updates "$warmup_updates" \ - --update-freq "$update_freq" \ - --skip-invalid-size-inputs-valid-test -``` - -Above we specify that our dictionary contains 1000 source position markers, and -that we want to use one attention head from the penultimate decoder layer for -pointing. It should run in 5.5 hours on one node with eight 32GB V100 GPUs. The -logged messages confirm that dictionary indices above 10000 will be mapped to -the `` embedding: - -``` -2020-09-24 20:43:53 | INFO | fairseq.tasks.translation | [src] dictionary: 11000 types -2020-09-24 20:43:53 | INFO | fairseq.tasks.translation | [tgt] dictionary: 11000 types -2020-09-24 20:43:53 | INFO | fairseq.data.data_utils | loaded 11332 examples from: bin/valid.src-tgt.src -2020-09-24 20:43:53 | INFO | fairseq.data.data_utils | loaded 11332 examples from: bin/valid.src-tgt.tgt -2020-09-24 20:43:53 | INFO | fairseq.tasks.translation | bin valid src-tgt 11332 examples -2020-09-24 20:43:53 | INFO | fairseq.models.transformer_pg | dictionary indices from 10000 to 10999 will be mapped to 3 -``` - -##### 4. Summarize the test sequences - -```bash -batch_size=32 -beam_size=6 -max_length=60 -length_penalty=1.0 - -fairseq-interactive bin \ - --user-dir examples/pointer_generator/pointer_generator_src \ - --batch-size "$batch_size" \ - --task translation \ - --source-lang src --target-lang tgt \ - --path checkpoints/checkpoint_last.pt \ - --input test.pg.src \ - --buffer-size 200 \ - --max-len-a 0 \ - --max-len-b "$max_length" \ - --lenpen "$length_penalty" \ - --beam "$beam_size" \ - --skip-invalid-size-inputs-valid-test | - tee generate.out -grep ^H generate.out | cut -f 3- >generate.hyp -``` - -Now you should have the generated sequences in `generate.hyp`. They contain -`` tokens that the model has copied from the source sequence. In order to -retrieve the original words, we need the unprocessed source sequences from -`test.document`. - -##### 5. Process the generated output - -Since we skipped too long inputs when producing `generate.hyp`, we also have to -skip too long sequences now that we read `test.document`. - -```bash -./postprocess.py \ - --source <(awk 'NF<1024' test.document) \ - --target generate.hyp \ - --target-out generate.hyp.processed -``` - -Now you'll find the final sequences from `generate.hyp.processed`, with -`` replaced with the original word from the source sequence. - -##### An example of a summarized sequence - -The original source document in `test.document`: - -> de roon moved to teesside in june 2016 for an initial # 8.8 m fee and played 33 premier league games last term . 
the netherlands international , 26 , scored five goals in 36 league and cup games during his spell at boro . meanwhile , manager garry monk confirmed the championship club 's interest in signing chelsea midfielder lewis baker . `` he 's a target and one of many that we 've had throughout the summer months , '' said monk . find all the latest football transfers on our dedicated page . - -The preprocessed source document in `test.src.pg`: - -> de \ moved to \ in june 2016 for an initial # \ m fee and played 33 premier league games last term . the netherlands international , 26 , scored five goals in 36 league and cup games during his spell at boro . meanwhile , manager garry monk confirmed the championship club 's interest in signing chelsea midfielder lewis baker . `` he 's a target and one of many that we 've had throughout the summer months , '' said monk . find all the latest football transfers on our dedicated page . - -The generated summary in `generate.hyp`: - -> middlesbrough striker \ de \ has joined spanish side \ on a season-long loan . - -The generated summary after postprocessing in `generate.hyp.processed`: - -> middlesbrough striker \ de roon has joined spanish side \ on a season-long loan . diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/distance_weighting.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/distance_weighting.py deleted file mode 100644 index 93052003b1e47fd663c70aedcecd144171f49204..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/distance_weighting.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -from saicinpainting.training.losses.perceptual import IMAGENET_STD, IMAGENET_MEAN - - -def dummy_distance_weighter(real_img, pred_img, mask): - return mask - - -def get_gauss_kernel(kernel_size, width_factor=1): - coords = torch.stack(torch.meshgrid(torch.arange(kernel_size), - torch.arange(kernel_size)), - dim=0).float() - diff = torch.exp(-((coords - kernel_size // 2) ** 2).sum(0) / kernel_size / width_factor) - diff /= diff.sum() - return diff - - -class BlurMask(nn.Module): - def __init__(self, kernel_size=5, width_factor=1): - super().__init__() - self.filter = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2, padding_mode='replicate', bias=False) - self.filter.weight.data.copy_(get_gauss_kernel(kernel_size, width_factor=width_factor)) - - def forward(self, real_img, pred_img, mask): - with torch.no_grad(): - result = self.filter(mask) * mask - return result - - -class EmulatedEDTMask(nn.Module): - def __init__(self, dilate_kernel_size=5, blur_kernel_size=5, width_factor=1): - super().__init__() - self.dilate_filter = nn.Conv2d(1, 1, dilate_kernel_size, padding=dilate_kernel_size// 2, padding_mode='replicate', - bias=False) - self.dilate_filter.weight.data.copy_(torch.ones(1, 1, dilate_kernel_size, dilate_kernel_size, dtype=torch.float)) - self.blur_filter = nn.Conv2d(1, 1, blur_kernel_size, padding=blur_kernel_size // 2, padding_mode='replicate', bias=False) - self.blur_filter.weight.data.copy_(get_gauss_kernel(blur_kernel_size, width_factor=width_factor)) - - def forward(self, real_img, pred_img, mask): - with torch.no_grad(): - known_mask = 1 - mask - dilated_known_mask = (self.dilate_filter(known_mask) > 1).float() - result = self.blur_filter(1 - dilated_known_mask) * mask - return result - - -class 
PropagatePerceptualSim(nn.Module): - def __init__(self, level=2, max_iters=10, temperature=500, erode_mask_size=3): - super().__init__() - vgg = torchvision.models.vgg19(pretrained=True).features - vgg_avg_pooling = [] - - for weights in vgg.parameters(): - weights.requires_grad = False - - cur_level_i = 0 - for module in vgg.modules(): - if module.__class__.__name__ == 'Sequential': - continue - elif module.__class__.__name__ == 'MaxPool2d': - vgg_avg_pooling.append(nn.AvgPool2d(kernel_size=2, stride=2, padding=0)) - else: - vgg_avg_pooling.append(module) - if module.__class__.__name__ == 'ReLU': - cur_level_i += 1 - if cur_level_i == level: - break - - self.features = nn.Sequential(*vgg_avg_pooling) - - self.max_iters = max_iters - self.temperature = temperature - self.do_erode = erode_mask_size > 0 - if self.do_erode: - self.erode_mask = nn.Conv2d(1, 1, erode_mask_size, padding=erode_mask_size // 2, bias=False) - self.erode_mask.weight.data.fill_(1) - - def forward(self, real_img, pred_img, mask): - with torch.no_grad(): - real_img = (real_img - IMAGENET_MEAN.to(real_img)) / IMAGENET_STD.to(real_img) - real_feats = self.features(real_img) - - vertical_sim = torch.exp(-(real_feats[:, :, 1:] - real_feats[:, :, :-1]).pow(2).sum(1, keepdim=True) - / self.temperature) - horizontal_sim = torch.exp(-(real_feats[:, :, :, 1:] - real_feats[:, :, :, :-1]).pow(2).sum(1, keepdim=True) - / self.temperature) - - mask_scaled = F.interpolate(mask, size=real_feats.shape[-2:], mode='bilinear', align_corners=False) - if self.do_erode: - mask_scaled = (self.erode_mask(mask_scaled) > 1).float() - - cur_knowness = 1 - mask_scaled - - for iter_i in range(self.max_iters): - new_top_knowness = F.pad(cur_knowness[:, :, :-1] * vertical_sim, (0, 0, 1, 0), mode='replicate') - new_bottom_knowness = F.pad(cur_knowness[:, :, 1:] * vertical_sim, (0, 0, 0, 1), mode='replicate') - - new_left_knowness = F.pad(cur_knowness[:, :, :, :-1] * horizontal_sim, (1, 0, 0, 0), mode='replicate') - new_right_knowness = F.pad(cur_knowness[:, :, :, 1:] * horizontal_sim, (0, 1, 0, 0), mode='replicate') - - new_knowness = torch.stack([new_top_knowness, new_bottom_knowness, - new_left_knowness, new_right_knowness], - dim=0).max(0).values - - cur_knowness = torch.max(cur_knowness, new_knowness) - - cur_knowness = F.interpolate(cur_knowness, size=mask.shape[-2:], mode='bilinear') - result = torch.min(mask, 1 - cur_knowness) - - return result - - -def make_mask_distance_weighter(kind='none', **kwargs): - if kind == 'none': - return dummy_distance_weighter - if kind == 'blur': - return BlurMask(**kwargs) - if kind == 'edt': - return EmulatedEDTMask(**kwargs) - if kind == 'pps': - return PropagatePerceptualSim(**kwargs) - raise ValueError(f'Unknown mask distance weighter kind {kind}') diff --git a/spaces/krisnadwipaj/interactive-dashboard/README.md b/spaces/krisnadwipaj/interactive-dashboard/README.md deleted file mode 100644 index 9f42b75a95389a52162025168f554ba4f5fa52cf..0000000000000000000000000000000000000000 --- a/spaces/krisnadwipaj/interactive-dashboard/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TugasTambahan -emoji: 🦀 -colorFrom: purple -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/vtoonify.py b/spaces/kukuhtw/VToonify/vtoonify/model/vtoonify.py deleted file mode 100644 index 
6556a0a6c734be5f413f4683eb63c44f449c6af8..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/vtoonify.py +++ /dev/null @@ -1,286 +0,0 @@ -import torch -import numpy as np -import math -from torch import nn -from model.stylegan.model import ConvLayer, EqualLinear, Generator, ResBlock -from model.dualstylegan import AdaptiveInstanceNorm, AdaResBlock, DualStyleGAN -import torch.nn.functional as F - -# IC-GAN: stylegan discriminator -class ConditionalDiscriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], use_condition=False, style_num=None): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - self.use_condition = use_condition - - if self.use_condition: - self.condition_dim = 128 - # map style degree to 64-dimensional vector - self.label_mapper = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, self.condition_dim//2), - ) - # map style code index to 64-dimensional vector - self.style_mapper = nn.Embedding(style_num, self.condition_dim-self.condition_dim//2) - else: - self.condition_dim = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], self.condition_dim), - ) - - def forward(self, input, degree_label=None, style_ind=None): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - out = out.view(batch, -1) - - if self.use_condition: - h = self.final_linear(out) - condition = torch.cat((self.label_mapper(degree_label), self.style_mapper(style_ind)), dim=1) - out = (h * condition).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.condition_dim)) - else: - out = self.final_linear(out) - - return out - - -class VToonifyResBlock(nn.Module): - def __init__(self, fin): - super().__init__() - - self.conv = nn.Conv2d(fin, fin, 3, 1, 1) - self.conv2 = nn.Conv2d(fin, fin, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - out = self.lrelu(self.conv(x)) - out = self.lrelu(self.conv2(out)) - out = (out + x) / math.sqrt(2) - return out - -class Fusion(nn.Module): - def __init__(self, in_channels, skip_channels, out_channels): - super().__init__() - - # create conv layers - self.conv = nn.Conv2d(in_channels + skip_channels, out_channels, 3, 1, 1, bias=True) - self.norm = AdaptiveInstanceNorm(in_channels + skip_channels, 128) - self.conv2 = nn.Conv2d(in_channels + 
skip_channels, 1, 3, 1, 1, bias=True) - #''' - self.linear = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 128), - nn.LeakyReLU(negative_slope=0.2, inplace=True) - ) - - def forward(self, f_G, f_E, d_s=1): - # label of style degree - label = self.linear(torch.zeros(f_G.size(0),1).to(f_G.device) + d_s) - out = torch.cat([f_G, abs(f_G-f_E)], dim=1) - m_E = (F.relu(self.conv2(self.norm(out, label)))).tanh() - f_out = self.conv(torch.cat([f_G, f_E * m_E], dim=1)) - return f_out, m_E - -class VToonify(nn.Module): - def __init__(self, - in_size=256, - out_size=1024, - img_channels=3, - style_channels=512, - num_mlps=8, - channel_multiplier=2, - num_res_layers=6, - backbone = 'dualstylegan', - ): - - super().__init__() - - self.backbone = backbone - if self.backbone == 'dualstylegan': - # DualStyleGAN, with weights being fixed - self.generator = DualStyleGAN(out_size, style_channels, num_mlps, channel_multiplier) - else: - # StyleGANv2, with weights being fixed - self.generator = Generator(out_size, style_channels, num_mlps, channel_multiplier) - - self.in_size = in_size - self.style_channels = style_channels - channels = self.generator.channels - - # encoder - num_styles = int(np.log2(out_size)) * 2 - 2 - encoder_res = [2**i for i in range(int(np.log2(in_size)), 4, -1)] - self.encoder = nn.ModuleList() - self.encoder.append( - nn.Sequential( - nn.Conv2d(img_channels+19, 32, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(32, channels[in_size], 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True))) - - for res in encoder_res: - in_channels = channels[res] - if res > 32: - out_channels = channels[res // 2] - block = nn.Sequential( - nn.Conv2d(in_channels, out_channels, 3, 2, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True)) - self.encoder.append(block) - else: - layers = [] - for _ in range(num_res_layers): - layers.append(VToonifyResBlock(in_channels)) - self.encoder.append(nn.Sequential(*layers)) - block = nn.Conv2d(in_channels, img_channels, 1, 1, 0, bias=True) - self.encoder.append(block) - - # trainable fusion module - self.fusion_out = nn.ModuleList() - self.fusion_skip = nn.ModuleList() - for res in encoder_res[::-1]: - num_channels = channels[res] - if self.backbone == 'dualstylegan': - self.fusion_out.append( - Fusion(num_channels, num_channels, num_channels)) - else: - self.fusion_out.append( - nn.Conv2d(num_channels * 2, num_channels, 3, 1, 1, bias=True)) - - self.fusion_skip.append( - nn.Conv2d(num_channels + 3, 3, 3, 1, 1, bias=True)) - - # Modified ModRes blocks in DualStyleGAN, with weights being fixed - if self.backbone == 'dualstylegan': - self.res = nn.ModuleList() - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1, no use in this model - for i in range(3, 6): - out_channel = self.generator.channels[2 ** i] - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - - - def forward(self, x, style, d_s=None, return_mask=False, return_feat=False): - # map style to W+ space - if style is not None and style.ndim < 3: - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = style.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - elif style is not None: - nB, nL, nD = 
style.shape - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = style - if self.backbone == 'dualstylegan': - adastyles = adastyles.clone() - for i in range(7, self.generator.n_latent): - adastyles[:, i] = self.generator.res[i](adastyles[:, i]) - - # obtain multi-scale content features - feat = x - encoder_features = [] - # downsampling conv parts of E - for block in self.encoder[:-2]: - feat = block(feat) - encoder_features.append(feat) - encoder_features = encoder_features[::-1] - # Resblocks in E - for ii, block in enumerate(self.encoder[-2]): - feat = block(feat) - # adjust Resblocks with ModRes blocks - if self.backbone == 'dualstylegan': - feat = self.res[ii+1](feat, resstyles[:, ii+1], d_s) - # the last-layer feature of E (inputs of backbone) - out = feat - skip = self.encoder[-1](feat) - if return_feat: - return out, skip - - # 32x32 ---> higher res - _index = 1 - m_Es = [] - for conv1, conv2, to_rgb in zip( - self.stylegan().convs[6::2], self.stylegan().convs[7::2], self.stylegan().to_rgbs[3:]): - - # pass the mid-layer features of E to the corresponding resolution layers of G - if 2 ** (5+((_index-1)//2)) <= self.in_size: - fusion_index = (_index - 1) // 2 - f_E = encoder_features[fusion_index] - - if self.backbone == 'dualstylegan': - out, m_E = self.fusion_out[fusion_index](out, f_E, d_s) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E*m_E], dim=1)) - m_Es += [m_E] - else: - out = self.fusion_out[fusion_index](torch.cat([out, f_E], dim=1)) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E], dim=1)) - - # remove the noise input - batch, _, height, width = out.shape - noise = x.new_empty(batch, 1, height * 2, width * 2).normal_().detach() * 0.0 - - out = conv1(out, adastyles[:, _index+6], noise=noise) - out = conv2(out, adastyles[:, _index+7], noise=noise) - skip = to_rgb(out, adastyles[:, _index+8], skip) - _index += 2 - - image = skip - if return_mask and self.backbone == 'dualstylegan': - return image, m_Es - return image - - def stylegan(self): - if self.backbone == 'dualstylegan': - return self.generator.generator - else: - return self.generator - - def zplus2wplus(self, zplus): - return self.stylegan().style(zplus.reshape(zplus.shape[0]*zplus.shape[1], zplus.shape[2])).reshape(zplus.shape) \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-6acaa952.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-6acaa952.css deleted file mode 100644 index 14e404a17a006e0cc8dd1c7e51df22ea863e0a66..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-6acaa952.css +++ /dev/null @@ -1 +0,0 @@ -.input-number.svelte-x6nxfm{transition:.15s;box-shadow:var(--shadow-drop);background:var(--background-fill-secondary)}.input-number.svelte-x6nxfm:hover{box-shadow:var(--shadow-drop-lg)}.range.svelte-x6nxfm{display:flex}.item.svelte-x6nxfm{flex:1 1 0%}.dropdown-menu.svelte-1cqwepf{box-shadow:var(--shadow-drop)}.dropdown-item.svelte-1cqwepf{display:block;transition:.15s;cursor:pointer;background:var(--background-fill-primary);padding:var(--size-2) 
var(--size-3);white-space:nowrap}.dropdown-item.svelte-1cqwepf:first-child{border-top-right-radius:var(--radius-md);border-top-left-radius:var(--radius-md)}.dropdown-item.svelte-1cqwepf:last-child{border-bottom-right-radius:var(--radius-md);border-bottom-left-radius:var(--radius-md)}.dropdown-item.svelte-1cqwepf:hover{font-weight:var(--weight-semibold)}.input-checkbox.svelte-1nw19ca.svelte-1nw19ca{display:inline-block}svg.svelte-1nw19ca.svelte-1nw19ca{width:var(--size-4);height:var(--size-3)}.selected.svelte-1nw19ca svg.svelte-1nw19ca{opacity:1}.input-checkbox.svelte-1nw19ca.svelte-1nw19ca{display:flex;gap:var(--size-1);cursor:pointer;border-radius:var(--radius-md);padding:var(--size-2) var(--size-3)}.checkbox.svelte-1nw19ca.svelte-1nw19ca{display:flex;justify-content:center;align-items:center;border:1px solid var(--border-color-primary);background:var(--background-fill-primary);width:var(--size-4);height:var(--size-4)}.checkbox-item.svelte-1nw19ca.svelte-1nw19ca{transition:.15s;box-shadow:var(--shadow-drop);background:var(--background-fill-primary)}.checkbox-item.svelte-1nw19ca.svelte-1nw19ca:hover{box-shadow:var(--shadow-drop-lg)}.checkbox-item.selected.svelte-1nw19ca.svelte-1nw19ca{background:var(--color-accent-base);color:#fff}svg.svelte-1cbhr6k.svelte-1cbhr6k{width:var(--size-4);height:var(--size-3)}.selected.svelte-1cbhr6k svg.svelte-1cbhr6k{opacity:1}.input-checkbox-group.svelte-1cbhr6k.svelte-1cbhr6k{display:flex;flex-wrap:wrap;gap:var(--size-2)}.checkbox-item.svelte-1cbhr6k.svelte-1cbhr6k{display:flex;align-items:center;gap:var(--size-1);transition:.15s;cursor:pointer;box-shadow:var(--shadow-drop);border-radius:var(--radius-md);background:var(--background-fill-primary);padding:var(--size-2) var(--size-3);font-weight:var(--weight-semibold)}.checkbox-item.svelte-1cbhr6k.svelte-1cbhr6k:hover{box-shadow:var(--shadow-drop-lg)}.checkbox.svelte-1cbhr6k.svelte-1cbhr6k{display:flex;justify-content:center;align-items:center;border:1px solid var(--border-color-primary);background:var(--background-fill-primary);width:var(--size-4);height:var(--size-4)}.selected.svelte-1cbhr6k .checkbox.svelte-1cbhr6k{background:var(--color-accent-base)}.checkbox-item.svelte-1cbhr6k.svelte-1cbhr6k{transition:.15s;box-shadow:var(--shadow-drop);background:var(--background-fill-primary)}.checkbox-item.selected.svelte-1cbhr6k.svelte-1cbhr6k{background:var(--color-accent-base);color:#fff}input.svelte-1sxprr7.svelte-1sxprr7::-webkit-slider-thumb,.range.svelte-1sxprr7.svelte-1sxprr7::-moz-range-thumb{-webkit-appearance:none;appearance:none;cursor:pointer;border-radius:var(--radius-md);width:var(--size-5);height:var(--size-5)}.input-slider.svelte-1sxprr7.svelte-1sxprr7{text-align:center}.range.svelte-1sxprr7.svelte-1sxprr7{display:flex}input.svelte-1sxprr7.svelte-1sxprr7{transition:.15s;box-shadow:var(--shadow-drop);border-radius:var(--radius-md);background:var(--background-fill-primary);width:var(--size-full);height:var(--size-3)}input.svelte-1sxprr7.svelte-1sxprr7:hover{box-shadow:var(--shadow-drop-lg)}input.svelte-1sxprr7.svelte-1sxprr7::-webkit-slider-thumb,input.svelte-1sxprr7.svelte-1sxprr7::-moz-range-thumb{box-shadow:var(--shadow-drop);background:linear-gradient(to bottom,var(--color-orange-300),var(--color-orange-500))}.original.svelte-1sxprr7.svelte-1sxprr7{display:inline-block;margin:var(--size-1) auto;border-radius:var(--radius-md);padding:var(--size-0-5) var(--size-2)}.range.svelte-1sxprr7>div.svelte-1sxprr7{flex:1 1 
0%;height:var(--size-4)}.input-radio.svelte-1nekfre{display:flex;flex-wrap:wrap;gap:var(--size-2)}.radio-item.svelte-1nekfre{display:flex;align-items:center;gap:var(--size-2);transition:.15s;cursor:pointer;border-radius:var(--radius-md);background:var(--background-fill-primary);padding:var(--size-2) var(--size-3);font-weight:var(--weight-semibold)}.radio-item.svelte-1nekfre:hover{box-shadow:var(--shadow-drop-lg)}.radio-circle.svelte-1nekfre{box-sizing:border-box;border-radius:var(--radius-full);width:var(--size-4);height:var(--size-4)}.radio-item.selected.svelte-1nekfre{box-shadow:var(--shadow-drop);background:var(--color-accent-base);color:#fff}.image-preview.svelte-h0dntu{display:flex;position:relative;justify-content:center;align-items:center;background:var(--background-fill-primary);width:var(--size-full);height:var(--size-60)}.interpretation.svelte-h0dntu{display:flex;position:absolute;top:0;left:0;justify-content:center;align-items:center;opacity:.9;transition:.15s;width:var(--size-full);height:var(--size-full)}.interpretation.svelte-h0dntu:hover{opacity:.2}img.svelte-h0dntu{width:var(--size-full);height:var(--size-full);object-fit:contain}.range.svelte-13lmfcp{display:flex}.item.svelte-13lmfcp{display:flex;height:var(--size-4)}.input-text.svelte-15c0u2m{border-radius:var(--radius-md);padding:var(--size-2);width:var(--size-full);overflow-wrap:break-word}.text-span.svelte-15c0u2m{padding:var(--size-1)} diff --git a/spaces/lIlIlllllmeng/QQsign1/README.md b/spaces/lIlIlllllmeng/QQsign1/README.md deleted file mode 100644 index 3042be806844c4b6d92719e8afaa17d09c970d46..0000000000000000000000000000000000000000 --- a/spaces/lIlIlllllmeng/QQsign1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: QQsign -emoji: 🦀 -colorFrom: blue -colorTo: purple -sdk: docker -pinned: false -license: mit -duplicated_from: CikeyQI/QQsign ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/langvision/ChatWeb/_next/static/chunks/app/layout-8751392d4d221e52.js b/spaces/langvision/ChatWeb/_next/static/chunks/app/layout-8751392d4d221e52.js deleted file mode 100644 index 18bffb92ec4b4a03f476faf4af376f652a6ad910..0000000000000000000000000000000000000000 --- a/spaces/langvision/ChatWeb/_next/static/chunks/app/layout-8751392d4d221e52.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[185],{35264:function(n,e,u){Promise.resolve().then(u.t.bind(u,98410,23))},98410:function(){}},function(n){n.O(0,[253,698,744],function(){return n(n.s=35264)}),_N_E=n.O()}]); \ No newline at end of file diff --git a/spaces/librarian-bots/new_hub_datasets/README.md b/spaces/librarian-bots/new_hub_datasets/README.md deleted file mode 100644 index 7dc3cbac4dfc1c68220f7fd17142e8b5cfebc5fa..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/new_hub_datasets/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Recent Hugging Face Datasets -emoji: 🦀 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -python_version: 3.11.6 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lifan0127/zotero-qa/style.css b/spaces/lifan0127/zotero-qa/style.css deleted file mode 100644 index ffbe093047313f244af8be541b117e10ff29f3cf..0000000000000000000000000000000000000000 --- a/spaces/lifan0127/zotero-qa/style.css +++ /dev/null @@ -1,17 +0,0 @@ - #zotero-library-type label { - width: 
48.5%; - } - - #zotero-collection label { - width: 100%; - display: block; - } - - .zotero-link { - font-size: 0.85rem; - color: #2d7ea9; - } - - #answer .generating{ - display: none; - } diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Camtasia Studio 9 Anahtar [TOP].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Camtasia Studio 9 Anahtar [TOP].md deleted file mode 100644 index a7d818394136cd2db73d36dc0143b5e1dbfb7e81..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Camtasia Studio 9 Anahtar [TOP].md +++ /dev/null @@ -1,7 +0,0 @@ -
                                                                                                                                              -

The impact of Camtasia Studio 2019 keygen on the world of business has never been seen before. This software has changed the way business was handled earlier. Users can record screen activity from any PC in a sequence so they can make use of it afterwards. Camtasia Studio 5.5.0 crack is one of the most popular programs for recording videos, and it is among the best programs for showing your work to the market. It is used by many people around the world and serves different purposes: it lets you record on-screen activity, capture sound from a microphone, and work with video, animations, webpages, and documents. It also helps you convert audio files, video files, images, documents, webpages, MP3s, and many other formats. It is very easy to use for beginners and has many new tools that make your work easy and effective.

                                                                                                                                              -

Camtasia Studio 9 Anahtar


Download File: https://bytlly.com/2uGxvt



                                                                                                                                              -

The Camtasia user can share, discuss, and export the complete screen video as well as the audio recording they created. It supports Windows as well as Mac, Linux, and other platforms. The new addition in Camtasia Studio 2020 key converts your audio recordings to MP3 format. It records both webcam and microphone in three high resolutions, and it has many special plugins that help you record activities smoothly. You can search for any recording tool on the internet.

                                                                                                                                              -

The Camtasia Studio 2018 software is now updated and more powerful. It allows you to record and edit sound and video in a single app, and it can also convert the audio, video, photos, webpages, and other files on your PC. It gives you complete control over your video and audio files through the Camtasia Studio license key. The audio editing tools in Camtasia Studio 2018 are really easy to use, and it records both webcam and microphone. Camtasia is a powerful tool for creating tutorials, demos, training modules, and audio and video recordings.

                                                                                                                                              899543212b
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Essay About The Beauty Of Palawan.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Essay About The Beauty Of Palawan.md deleted file mode 100644 index 173fe9e24012a682e5f1e2619d30b0b10d35fa57..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Essay About The Beauty Of Palawan.md +++ /dev/null @@ -1,21 +0,0 @@ -
                                                                                                                                              -

                                                                                                                                              Essay About The Beauty Of Palawan

                                                                                                                                              -

                                                                                                                                              Palawan is a province in the Philippines that is known for its natural beauty and biodiversity. It is often called the "last ecological frontier" of the country because of its pristine forests, beaches, and coral reefs. Palawan is also home to many cultural and historical attractions, such as the Puerto Princesa Subterranean River National Park, which is a UNESCO World Heritage Site and one of the New 7 Wonders of Nature. In this essay, I will explore some of the reasons why Palawan is a beautiful and fascinating destination for travelers.

                                                                                                                                              - -

                                                                                                                                              The Natural Wonders of Palawan

                                                                                                                                              -

                                                                                                                                              One of the main reasons why Palawan is a beautiful place to visit is its diverse and stunning natural scenery. Palawan has a long coastline that stretches for more than 2,000 kilometers, offering many opportunities for swimming, snorkeling, diving, and island hopping. Some of the most popular islands in Palawan are Coron, El Nido, and Balabac, which have crystal-clear waters, white-sand beaches, and limestone cliffs. Palawan also has many inland attractions, such as mountains, waterfalls, caves, and lakes. One of the most famous examples is the Puerto Princesa Subterranean River, which is an 8.2-kilometer long underground river that flows directly to the sea. The river has many unique features, such as stalactites, stalagmites, and a 20-million-year-old fossil of a sea cow.

                                                                                                                                              -

                                                                                                                                              Essay About The Beauty Of Palawan


                                                                                                                                              Download File ○○○ https://bytlly.com/2uGwGi



                                                                                                                                              - -

                                                                                                                                              The Biodiversity of Palawan

                                                                                                                                              -

                                                                                                                                              Another reason why Palawan is a beautiful place to visit is its rich and varied biodiversity. Palawan has many different ecosystems, such as tropical rainforests, mangroves, seagrass beds, and coral reefs. These ecosystems support a wide range of flora and fauna, many of which are endemic or endangered. For example, Palawan has more than 600 species of butterflies, 279 species of birds, 58 species of mammals, and 379 species of corals. Some of the most iconic animals in Palawan are the Philippine cockatoo, the Palawan peacock-pheasant, the Philippine mouse-deer, the dugong, and the Philippine crocodile. Palawan also has many indigenous plants, such as orchids, pitcher plants, and rafflesia.

                                                                                                                                              - -

                                                                                                                                              The Culture and History of Palawan

                                                                                                                                              -

                                                                                                                                              A third reason why Palawan is a beautiful place to visit is its rich and diverse culture and history. Palawan has been inhabited by various groups of people for thousands of years, such as the Tagbanua, the Palaw'an, and the Batak. These groups have their own languages, traditions, beliefs, and arts. They also have a close relationship with nature and practice sustainable ways of living. Palawan also has many historical sites that reflect its colonial past, such as churches, forts, museums, and shrines. Some of the most notable examples are the Immaculate Conception Cathedral in Puerto Princesa City, the Culion Leper Colony in Coron Island, and the Tabon Caves in Quezon Municipality.

                                                                                                                                              - -

                                                                                                                                              Conclusion

                                                                                                                                              -

                                                                                                                                              In conclusion, Palawan is a beautiful place to visit because of its natural wonders, its biodiversity, and its culture and history. It is a destination that offers something for everyone, whether they are looking for adventure, relaxation, or education. Palawan is truly a paradise on earth that deserves to be explored and appreciated.

                                                                                                                                              -


                                                                                                                                              3cee63e6c2
                                                                                                                                              -
                                                                                                                                              -
                                                                                                                                              \ No newline at end of file diff --git a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp b/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp deleted file mode 100644 index 551243fdadfd1682b5dc6628623b67a79b3f6c74..0000000000000000000000000000000000000000 --- a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp +++ /dev/null @@ -1,43 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include - -#include -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -} // namespace groundingdino diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/inference/slicer.py b/spaces/lllqqq/so-vits-svc-models-pcr/inference/slicer.py deleted file mode 100644 index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) 
- self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. - if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. 
- if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # 标识所有静音片段 - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/lojban/text-to-speech/libs/audio.py b/spaces/lojban/text-to-speech/libs/audio.py deleted file mode 100644 index 2559bc999888c739079586000e1951afbeda3f68..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/libs/audio.py +++ /dev/null @@ -1,55 +0,0 @@ -import numpy as np -import pydub -from re import sub - -def float2pcm(sig, dtype='int16'): - """Convert floating point signal with a range from -1 to 1 to PCM. - Any signal values outside the interval [-1.0, 1.0) are clipped. - No dithering is used. - Note that there are different possibilities for scaling floating - point numbers to PCM numbers, this function implements just one of - them. For an overview of alternatives see - http://blog.bjornroche.com/2009/12/int-float-int-its-jungle-out-there.html - Parameters - ---------- - sig : array_like - Input array, must have floating point type. - dtype : data type, optional - Desired (integer) data type. - Returns - ------- - numpy.ndarray - Integer data, scaled and clipped to the range of the given - *dtype*. 
- See Also - -------- - pcm2float, dtype - """ - sig = np.asarray(sig) - if sig.dtype.kind != 'f': - raise TypeError("'sig' must be a float array") - dtype = np.dtype(dtype) - if dtype.kind not in 'iu': - raise TypeError("'dtype' must be an integer type") - - i = np.iinfo(dtype) - abs_max = 2 ** (i.bits - 1) - offset = i.min + abs_max - return (sig * abs_max + offset).clip(i.min, i.max).astype(dtype) - -def strip_text(text: str) -> str: - return sub(r"[^a-zA-Z0-9 ]", "", text) - -def wav2ogg(x, sr, text, language, normalized=True): - print(x,sr,text,language) - """numpy array to MP3""" - channels = 2 if (x.ndim == 2 and x.shape[1] == 2) else 1 - if normalized: # normalized array - each item should be a float in [-1, 1) - y = np.int16(x * 2 ** 15) - else: - y = np.int16(x) - song = pydub.AudioSegment(y.tobytes(), frame_rate=sr, sample_width=2, channels=channels) - path = f"/tmp/{language}-{strip_text(text)}.ogg" - song.export(path, format="ogg", codec="libvorbis") - # samples = song.get_array_of_samples() - return path # np.array(samples) diff --git a/spaces/loss4Wang/architecture_styles/README.md b/spaces/loss4Wang/architecture_styles/README.md deleted file mode 100644 index fe1efc963bd1cac3fd33f3bd08c1962d986d2005..0000000000000000000000000000000000000000 --- a/spaces/loss4Wang/architecture_styles/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Architecture Styles -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.1.6 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ltgoslo/ssa-perin/data/field/mini_torchtext/field.py b/spaces/ltgoslo/ssa-perin/data/field/mini_torchtext/field.py deleted file mode 100644 index b3d36113d52a0da3b2547cd59d09bae893248ba1..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/data/field/mini_torchtext/field.py +++ /dev/null @@ -1,637 +0,0 @@ -# coding: utf8 -from collections import Counter, OrderedDict -from itertools import chain -import six -import torch - -from .pipeline import Pipeline -from .utils import get_tokenizer, dtype_to_attr, is_tokenizer_serializable -from .vocab import Vocab - - -class RawField(object): - """ Defines a general datatype. - - Every dataset consists of one or more types of data. For instance, a text - classification dataset contains sentences and their classes, while a - machine translation dataset contains paired examples of text in two - languages. Each of these types of data is represented by a RawField object. - A RawField object does not assume any property of the data type and - it holds parameters relating to how a datatype should be processed. - - Attributes: - preprocessing: The Pipeline that will be applied to examples - using this field before creating an example. - Default: None. - postprocessing: A Pipeline that will be applied to a list of examples - using this field before assigning to a batch. - Function signature: (batch(list)) -> object - Default: None. - is_target: Whether this field is a target variable. - Affects iteration over batches. Default: False - """ - - def __init__(self, preprocessing=None, postprocessing=None, is_target=False): - self.preprocessing = preprocessing - self.postprocessing = postprocessing - self.is_target = is_target - - def preprocess(self, x): - """ Preprocess an example if the `preprocessing` Pipeline is provided. 
""" - if hasattr(self, "preprocessing") and self.preprocessing is not None: - return self.preprocessing(x) - else: - return x - - def process(self, batch, *args, **kwargs): - """ Process a list of examples to create a batch. - - Postprocess the batch with user-provided Pipeline. - - Args: - batch (list(object)): A list of object from a batch of examples. - Returns: - object: Processed object given the input and custom - postprocessing Pipeline. - """ - if self.postprocessing is not None: - batch = self.postprocessing(batch) - return batch - - -class Field(RawField): - """Defines a datatype together with instructions for converting to Tensor. - - Field class models common text processing datatypes that can be represented - by tensors. It holds a Vocab object that defines the set of possible values - for elements of the field and their corresponding numerical representations. - The Field object also holds other parameters relating to how a datatype - should be numericalized, such as a tokenization method and the kind of - Tensor that should be produced. - - If a Field is shared between two columns in a dataset (e.g., question and - answer in a QA dataset), then they will have a shared vocabulary. - - Attributes: - sequential: Whether the datatype represents sequential data. If False, - no tokenization is applied. Default: True. - use_vocab: Whether to use a Vocab object. If False, the data in this - field should already be numerical. Default: True. - init_token: A token that will be prepended to every example using this - field, or None for no initial token. Default: None. - eos_token: A token that will be appended to every example using this - field, or None for no end-of-sentence token. Default: None. - fix_length: A fixed length that all examples using this field will be - padded to, or None for flexible sequence lengths. Default: None. - dtype: The torch.dtype class that represents a batch of examples - of this kind of data. Default: torch.long. - preprocessing: The Pipeline that will be applied to examples - using this field after tokenizing but before numericalizing. Many - Datasets replace this attribute with a custom preprocessor. - Default: None. - postprocessing: A Pipeline that will be applied to examples using - this field after numericalizing but before the numbers are turned - into a Tensor. The pipeline function takes the batch as a list, and - the field's Vocab. - Default: None. - lower: Whether to lowercase the text in this field. Default: False. - tokenize: The function used to tokenize strings using this field into - sequential examples. If "spacy", the SpaCy tokenizer is - used. If a non-serializable function is passed as an argument, - the field will not be able to be serialized. Default: string.split. - tokenizer_language: The language of the tokenizer to be constructed. - Various languages currently supported only in SpaCy. - include_lengths: Whether to return a tuple of a padded minibatch and - a list containing the lengths of each examples, or just a padded - minibatch. Default: False. - batch_first: Whether to produce tensors with the batch dimension first. - Default: False. - pad_token: The string token used as padding. Default: "". - unk_token: The string token used to represent OOV words. Default: "". - pad_first: Do the padding of the sequence at the beginning. Default: False. - truncate_first: Do the truncating of the sequence at the beginning. Default: False - stop_words: Tokens to discard during the preprocessing step. 
Default: None - is_target: Whether this field is a target variable. - Affects iteration over batches. Default: False - """ - - vocab_cls = Vocab - # Dictionary mapping PyTorch tensor dtypes to the appropriate Python - # numeric type. - dtypes = { - torch.float32: float, - torch.float: float, - torch.float64: float, - torch.double: float, - torch.float16: float, - torch.half: float, - - torch.uint8: int, - torch.int8: int, - torch.int16: int, - torch.short: int, - torch.int32: int, - torch.int: int, - torch.int64: int, - torch.long: int, - } - - ignore = ['dtype', 'tokenize'] - - def __init__(self, sequential=True, use_vocab=True, init_token=None, - eos_token=None, fix_length=None, dtype=torch.long, - preprocessing=None, postprocessing=None, lower=False, - tokenize=None, tokenizer_language='en', include_lengths=False, - batch_first=False, pad_token="", unk_token="", - pad_first=False, truncate_first=False, stop_words=None, - is_target=False): - self.sequential = sequential - self.use_vocab = use_vocab - self.init_token = init_token - self.eos_token = eos_token - self.unk_token = unk_token - self.fix_length = fix_length - self.dtype = dtype - self.preprocessing = preprocessing - self.postprocessing = postprocessing - self.lower = lower - # store params to construct tokenizer for serialization - # in case the tokenizer isn't picklable (e.g. spacy) - self.tokenizer_args = (tokenize, tokenizer_language) - self.tokenize = get_tokenizer(tokenize, tokenizer_language) - self.include_lengths = include_lengths - self.batch_first = batch_first - self.pad_token = pad_token if self.sequential else None - self.pad_first = pad_first - self.truncate_first = truncate_first - try: - self.stop_words = set(stop_words) if stop_words is not None else None - except TypeError: - raise ValueError("Stop words must be convertible to a set") - self.is_target = is_target - - def __getstate__(self): - str_type = dtype_to_attr(self.dtype) - if is_tokenizer_serializable(*self.tokenizer_args): - tokenize = self.tokenize - else: - # signal to restore in `__setstate__` - tokenize = None - attrs = {k: v for k, v in self.__dict__.items() if k not in self.ignore} - attrs['dtype'] = str_type - attrs['tokenize'] = tokenize - - return attrs - - def __setstate__(self, state): - state['dtype'] = getattr(torch, state['dtype']) - if not state['tokenize']: - state['tokenize'] = get_tokenizer(*state['tokenizer_args']) - self.__dict__.update(state) - - def __hash__(self): - # we don't expect this to be called often - return 42 - - def __eq__(self, other): - if not isinstance(other, RawField): - return False - - return self.__dict__ == other.__dict__ - - def preprocess(self, x): - """Load a single example using this field, tokenizing if necessary. - - If the input is a Python 2 `str`, it will be converted to Unicode - first. If `sequential=True`, it will be tokenized. 
Then the input - will be optionally lowercased and passed to the user-provided - `preprocessing` Pipeline.""" - if (six.PY2 and isinstance(x, six.string_types) - and not isinstance(x, six.text_type)): - x = Pipeline(lambda s: six.text_type(s, encoding='utf-8'))(x) - if self.sequential and isinstance(x, six.text_type): - x = self.tokenize(x.rstrip('\n')) - if self.lower: - x = Pipeline(six.text_type.lower)(x) - if self.sequential and self.use_vocab and self.stop_words is not None: - x = [w for w in x if w not in self.stop_words] - if hasattr(self, "preprocessing") and self.preprocessing is not None: - return self.preprocessing(x) - else: - return x - - def process(self, batch, device=None): - """ Process a list of examples to create a torch.Tensor. - - Pad, numericalize, and postprocess a batch and create a tensor. - - Args: - batch (list(object)): A list of object from a batch of examples. - Returns: - torch.autograd.Variable: Processed object given the input - and custom postprocessing Pipeline. - """ - padded = self.pad(batch) - tensor = self.numericalize(padded, device=device) - return tensor - - def pad(self, minibatch): - """Pad a batch of examples using this field. - - Pads to self.fix_length if provided, otherwise pads to the length of - the longest example in the batch. Prepends self.init_token and appends - self.eos_token if those attributes are not None. Returns a tuple of the - padded list and a list containing lengths of each example if - `self.include_lengths` is `True` and `self.sequential` is `True`, else just - returns the padded list. If `self.sequential` is `False`, no padding is applied. - """ - minibatch = list(minibatch) - if not self.sequential: - return minibatch - if self.fix_length is None: - max_len = max(len(x) for x in minibatch) - else: - max_len = self.fix_length + ( - self.init_token, self.eos_token).count(None) - 2 - padded, lengths = [], [] - for x in minibatch: - if self.pad_first: - padded.append( - [self.pad_token] * max(0, max_len - len(x)) - + ([] if self.init_token is None else [self.init_token]) - + list(x[-max_len:] if self.truncate_first else x[:max_len]) - + ([] if self.eos_token is None else [self.eos_token])) - else: - padded.append( - ([] if self.init_token is None else [self.init_token]) - + list(x[-max_len:] if self.truncate_first else x[:max_len]) - + ([] if self.eos_token is None else [self.eos_token]) - + [self.pad_token] * max(0, max_len - len(x))) - lengths.append(len(padded[-1]) - max(0, max_len - len(x))) - if self.include_lengths: - return (padded, lengths) - return padded - - def build_vocab(self, *args, **kwargs): - """Construct the Vocab object for this field from one or more datasets. - - Arguments: - Positional arguments: Dataset objects or other iterable data - sources from which to construct the Vocab object that - represents the set of possible values for this field. If - a Dataset object is provided, all columns corresponding - to this field are used; individual columns can also be - provided directly. - Remaining keyword arguments: Passed to the constructor of Vocab. 
- """ - counter = Counter() - sources = [] - for arg in args: - sources.append(arg) - for data in sources: - for x in data: - if not self.sequential: - x = [x] - try: - counter.update(x) - except TypeError: - counter.update(chain.from_iterable(x)) - specials = list(OrderedDict.fromkeys( - tok for tok in [self.unk_token, self.pad_token, self.init_token, - self.eos_token] + kwargs.pop('specials', []) - if tok is not None)) - self.vocab = self.vocab_cls(counter, specials=specials, **kwargs) - - def numericalize(self, arr, device=None): - """Turn a batch of examples that use this field into a Variable. - - If the field has include_lengths=True, a tensor of lengths will be - included in the return value. - - Arguments: - arr (List[List[str]], or tuple of (List[List[str]], List[int])): - List of tokenized and padded examples, or tuple of List of - tokenized and padded examples and List of lengths of each - example if self.include_lengths is True. - device (str or torch.device): A string or instance of `torch.device` - specifying which device the Variables are going to be created on. - If left as default, the tensors will be created on cpu. Default: None. - """ - if self.include_lengths and not isinstance(arr, tuple): - raise ValueError("Field has include_lengths set to True, but " - "input data is not a tuple of " - "(data batch, batch lengths).") - if isinstance(arr, tuple): - arr, lengths = arr - lengths = torch.tensor(lengths, dtype=self.dtype, device=device) - - if self.use_vocab: - if self.sequential: - arr = [[self.vocab.stoi[x] for x in ex] for ex in arr] - else: - arr = [self.vocab.stoi[x] for x in arr] - - if self.postprocessing is not None: - arr = self.postprocessing(arr, self.vocab) - else: - if self.dtype not in self.dtypes: - raise ValueError( - "Specified Field dtype {} can not be used with " - "use_vocab=False because we do not know how to numericalize it. " - "Please raise an issue at " - "https://github.com/pytorch/text/issues".format(self.dtype)) - numericalization_func = self.dtypes[self.dtype] - # It doesn't make sense to explicitly coerce to a numeric type if - # the data is sequential, since it's unclear how to coerce padding tokens - # to a numeric type. - if not self.sequential: - arr = [numericalization_func(x) if isinstance(x, six.string_types) - else x for x in arr] - if self.postprocessing is not None: - arr = self.postprocessing(arr, None) - - var = torch.tensor(arr, dtype=self.dtype, device=device) - - if self.sequential and not self.batch_first: - var.t_() - if self.sequential: - var = var.contiguous() - - if self.include_lengths: - return var, lengths - return var - - -class NestedField(Field): - """A nested field. - - A nested field holds another field (called *nesting field*), accepts an untokenized - string or a list string tokens and groups and treats them as one field as described - by the nesting field. Every token will be preprocessed, padded, etc. in the manner - specified by the nesting field. Note that this means a nested field always has - ``sequential=True``. The two fields' vocabularies will be shared. Their - numericalization results will be stacked into a single tensor. And NestedField will - share the same include_lengths with nesting_field, so one shouldn't specify the - include_lengths in the nesting_field. This field is - primarily used to implement character embeddings. See ``tests/data/test_field.py`` - for examples on how to use this field. - - Arguments: - nesting_field (Field): A field contained in this nested field. 
- use_vocab (bool): Whether to use a Vocab object. If False, the data in this - field should already be numerical. Default: ``True``. - init_token (str): A token that will be prepended to every example using this - field, or None for no initial token. Default: ``None``. - eos_token (str): A token that will be appended to every example using this - field, or None for no end-of-sentence token. Default: ``None``. - fix_length (int): A fixed length that all examples using this field will be - padded to, or ``None`` for flexible sequence lengths. Default: ``None``. - dtype: The torch.dtype class that represents a batch of examples - of this kind of data. Default: ``torch.long``. - preprocessing (Pipeline): The Pipeline that will be applied to examples - using this field after tokenizing but before numericalizing. Many - Datasets replace this attribute with a custom preprocessor. - Default: ``None``. - postprocessing (Pipeline): A Pipeline that will be applied to examples using - this field after numericalizing but before the numbers are turned - into a Tensor. The pipeline function takes the batch as a list, and - the field's Vocab. Default: ``None``. - include_lengths: Whether to return a tuple of a padded minibatch and - a list containing the lengths of each examples, or just a padded - minibatch. Default: False. - tokenize: The function used to tokenize strings using this field into - sequential examples. If "spacy", the SpaCy tokenizer is - used. If a non-serializable function is passed as an argument, - the field will not be able to be serialized. Default: string.split. - tokenizer_language: The language of the tokenizer to be constructed. - Various languages currently supported only in SpaCy. - pad_token (str): The string token used as padding. If ``nesting_field`` is - sequential, this will be set to its ``pad_token``. Default: ``""``. - pad_first (bool): Do the padding of the sequence at the beginning. Default: - ``False``. - """ - - def __init__(self, nesting_field, use_vocab=True, init_token=None, eos_token=None, - fix_length=None, dtype=torch.long, preprocessing=None, - postprocessing=None, tokenize=None, tokenizer_language='en', - include_lengths=False, pad_token='', - pad_first=False, truncate_first=False): - if isinstance(nesting_field, NestedField): - raise ValueError('nesting field must not be another NestedField') - if nesting_field.include_lengths: - raise ValueError('nesting field cannot have include_lengths=True') - - if nesting_field.sequential: - pad_token = nesting_field.pad_token - super(NestedField, self).__init__( - use_vocab=use_vocab, - init_token=init_token, - eos_token=eos_token, - fix_length=fix_length, - dtype=dtype, - preprocessing=preprocessing, - postprocessing=postprocessing, - lower=nesting_field.lower, - tokenize=tokenize, - tokenizer_language=tokenizer_language, - batch_first=True, - pad_token=pad_token, - unk_token=nesting_field.unk_token, - pad_first=pad_first, - truncate_first=truncate_first, - include_lengths=include_lengths - ) - self.nesting_field = nesting_field - # in case the user forget to do that - self.nesting_field.batch_first = True - - def preprocess(self, xs): - """Preprocess a single example. - - Firstly, tokenization and the supplied preprocessing pipeline is applied. Since - this field is always sequential, the result is a list. Then, each element of - the list is preprocessed using ``self.nesting_field.preprocess`` and the resulting - list is returned. - - Arguments: - xs (list or str): The input to preprocess. 
- - Returns: - list: The preprocessed list. - """ - return [self.nesting_field.preprocess(x) - for x in super(NestedField, self).preprocess(xs)] - - def pad(self, minibatch): - """Pad a batch of examples using this field. - - If ``self.nesting_field.sequential`` is ``False``, each example in the batch must - be a list of string tokens, and pads them as if by a ``Field`` with - ``sequential=True``. Otherwise, each example must be a list of list of tokens. - Using ``self.nesting_field``, pads the list of tokens to - ``self.nesting_field.fix_length`` if provided, or otherwise to the length of the - longest list of tokens in the batch. Next, using this field, pads the result by - filling short examples with ``self.nesting_field.pad_token``. - - Example: - >>> import pprint - >>> pp = pprint.PrettyPrinter(indent=4) - >>> - >>> nesting_field = Field(pad_token='', init_token='', eos_token='') - >>> field = NestedField(nesting_field, init_token='', eos_token='') - >>> minibatch = [ - ... [list('john'), list('loves'), list('mary')], - ... [list('mary'), list('cries')], - ... ] - >>> padded = field.pad(minibatch) - >>> pp.pprint(padded) - [ [ ['', '', '', '', '', '', ''], - ['', 'j', 'o', 'h', 'n', '', ''], - ['', 'l', 'o', 'v', 'e', 's', ''], - ['', 'm', 'a', 'r', 'y', '', ''], - ['', '', '', '', '', '', '']], - [ ['', '', '', '', '', '', ''], - ['', 'm', 'a', 'r', 'y', '', ''], - ['', 'c', 'r', 'i', 'e', 's', ''], - ['', '', '', '', '', '', ''], - ['', '', '', '', '', '', '']]] - - Arguments: - minibatch (list): Each element is a list of string if - ``self.nesting_field.sequential`` is ``False``, a list of list of string - otherwise. - - Returns: - list: The padded minibatch. or (padded, sentence_lens, word_lengths) - """ - minibatch = list(minibatch) - if not self.nesting_field.sequential: - return super(NestedField, self).pad(minibatch) - - # Save values of attributes to be monkeypatched - old_pad_token = self.pad_token - old_init_token = self.init_token - old_eos_token = self.eos_token - old_fix_len = self.nesting_field.fix_length - # Monkeypatch the attributes - if self.nesting_field.fix_length is None: - max_len = max(len(xs) for ex in minibatch for xs in ex) - fix_len = max_len + 2 - (self.nesting_field.init_token, - self.nesting_field.eos_token).count(None) - self.nesting_field.fix_length = fix_len - self.pad_token = [self.pad_token] * self.nesting_field.fix_length - if self.init_token is not None: - # self.init_token = self.nesting_field.pad([[self.init_token]])[0] - self.init_token = [self.init_token] - if self.eos_token is not None: - # self.eos_token = self.nesting_field.pad([[self.eos_token]])[0] - self.eos_token = [self.eos_token] - # Do padding - old_include_lengths = self.include_lengths - self.include_lengths = True - self.nesting_field.include_lengths = True - padded, sentence_lengths = super(NestedField, self).pad(minibatch) - padded_with_lengths = [self.nesting_field.pad(ex) for ex in padded] - word_lengths = [] - final_padded = [] - max_sen_len = len(padded[0]) - for (pad, lens), sentence_len in zip(padded_with_lengths, sentence_lengths): - if sentence_len == max_sen_len: - lens = lens - pad = pad - elif self.pad_first: - lens[:(max_sen_len - sentence_len)] = ( - [0] * (max_sen_len - sentence_len)) - pad[:(max_sen_len - sentence_len)] = ( - [self.pad_token] * (max_sen_len - sentence_len)) - else: - lens[-(max_sen_len - sentence_len):] = ( - [0] * (max_sen_len - sentence_len)) - pad[-(max_sen_len - sentence_len):] = ( - [self.pad_token] * (max_sen_len - sentence_len)) - 
word_lengths.append(lens) - final_padded.append(pad) - padded = final_padded - - # Restore monkeypatched attributes - self.nesting_field.fix_length = old_fix_len - self.pad_token = old_pad_token - self.init_token = old_init_token - self.eos_token = old_eos_token - self.include_lengths = old_include_lengths - if self.include_lengths: - return padded, sentence_lengths, word_lengths - return padded - - def build_vocab(self, *args, **kwargs): - """Construct the Vocab object for nesting field and combine it with this field's vocab. - - Arguments: - Positional arguments: Dataset objects or other iterable data - sources from which to construct the Vocab object that - represents the set of possible values for the nesting field. If - a Dataset object is provided, all columns corresponding - to this field are used; individual columns can also be - provided directly. - Remaining keyword arguments: Passed to the constructor of Vocab. - """ - sources = [] - for arg in args: - sources.append(arg) - - flattened = [] - for source in sources: - flattened.extend(source) - old_vectors = None - old_unk_init = None - old_vectors_cache = None - if "vectors" in kwargs.keys(): - old_vectors = kwargs["vectors"] - kwargs["vectors"] = None - if "unk_init" in kwargs.keys(): - old_unk_init = kwargs["unk_init"] - kwargs["unk_init"] = None - if "vectors_cache" in kwargs.keys(): - old_vectors_cache = kwargs["vectors_cache"] - kwargs["vectors_cache"] = None - # just build vocab and does not load vector - self.nesting_field.build_vocab(*flattened, **kwargs) - super(NestedField, self).build_vocab() - self.vocab.extend(self.nesting_field.vocab) - self.vocab.freqs = self.nesting_field.vocab.freqs.copy() - if old_vectors is not None: - self.vocab.load_vectors(old_vectors, - unk_init=old_unk_init, cache=old_vectors_cache) - - self.nesting_field.vocab = self.vocab - - def numericalize(self, arrs, device=None): - """Convert a padded minibatch into a variable tensor. - - Each item in the minibatch will be numericalized independently and the resulting - tensors will be stacked at the first dimension. - - Arguments: - arr (List[List[str]]): List of tokenized and padded examples. - device (str or torch.device): A string or instance of `torch.device` - specifying which device the Variables are going to be created on. - If left as default, the tensors will be created on cpu. Default: None. 
- """ - numericalized = [] - self.nesting_field.include_lengths = False - if self.include_lengths: - arrs, sentence_lengths, word_lengths = arrs - - for arr in arrs: - numericalized_ex = self.nesting_field.numericalize( - arr, device=device) - numericalized.append(numericalized_ex) - padded_batch = torch.stack(numericalized) - - self.nesting_field.include_lengths = True - if self.include_lengths: - sentence_lengths = \ - torch.tensor(sentence_lengths, dtype=self.dtype, device=device) - word_lengths = torch.tensor(word_lengths, dtype=self.dtype, device=device) - return (padded_batch, sentence_lengths, word_lengths) - return padded_batch \ No newline at end of file diff --git a/spaces/lthero/ChatGPT-lthero/README.md b/spaces/lthero/ChatGPT-lthero/README.md deleted file mode 100644 index 047e6fe8ece6aa307b7a005be20f566e30d2f741..0000000000000000000000000000000000000000 --- a/spaces/lthero/ChatGPT-lthero/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGPTWithVoice -emoji: 🐠 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -duplicated_from: lthero/ChatGPTWithVoice ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/luxuedong/lxd/src/lib/bots/bing/types.ts b/spaces/luxuedong/lxd/src/lib/bots/bing/types.ts deleted file mode 100644 index 5a9813b797d13b592ec17b45cfac4bd46510d883..0000000000000000000000000000000000000000 --- a/spaces/luxuedong/lxd/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,261 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_IP_FORBIDDEN = 'BING_IP_FORBIDDEN', - BING_TRY_LATER = 'BING_TRY_LATER', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - 
target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/core/util.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/core/util.h deleted file mode 100644 index ea4ed6400b1d1070f83994db7c57636f14024d03..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/core/util.h +++ 
/dev/null @@ -1,773 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ - -namespace cuda_cub { -namespace core { - -#ifdef __NVCOMPILER_CUDA__ -# if (__NVCOMPILER_CUDA_ARCH__ >= 600) -# define THRUST_TUNING_ARCH sm60 -# elif (__NVCOMPILER_CUDA_ARCH__ >= 520) -# define THRUST_TUNING_ARCH sm52 -# elif (__NVCOMPILER_CUDA_ARCH__ >= 350) -# define THRUST_TUNING_ARCH sm35 -# else -# define THRUST_TUNING_ARCH sm30 -# endif -#else -# if (__CUDA_ARCH__ >= 600) -# define THRUST_TUNING_ARCH sm60 -# elif (__CUDA_ARCH__ >= 520) -# define THRUST_TUNING_ARCH sm52 -# elif (__CUDA_ARCH__ >= 350) -# define THRUST_TUNING_ARCH sm35 -# elif (__CUDA_ARCH__ >= 300) -# define THRUST_TUNING_ARCH sm30 -# elif !defined (__CUDA_ARCH__) -# define THRUST_TUNING_ARCH sm30 -# endif -#endif - - // Typelist - a container of types, supports up to 10 types - // -------------------------------------------------------------------------- - - class _; - template - struct typelist; - - // ------------------------------------- - - // supported SM arch - // --------------------- - struct sm30 { enum { ver = 300, warpSize = 32 }; }; - struct sm35 { enum { ver = 350, warpSize = 32 }; }; - struct sm52 { enum { ver = 520, warpSize = 32 }; }; - struct sm60 { enum { ver = 600, warpSize = 32 }; }; - - // list of sm, checked from left to right order - // the rightmost is the lowest sm arch supported - // -------------------------------------------- - typedef typelist sm_list; - - // lowest supported SM arch - // -------------------------------------------------------------------------- - - template - struct lowest_supported_sm_arch_impl; - - template - struct lowest_supported_sm_arch_impl > - : lowest_supported_sm_arch_impl<_0, typelist< _1, _2, _3, _4, _5, _6, 
_7, _8, _9> > {}; - template - struct lowest_supported_sm_arch_impl > - { - typedef SM type; - }; - - typedef typename lowest_supported_sm_arch_impl<_,sm_list>::type lowest_supported_sm_arch; - - // metafunction to match next viable PtxPlan specialization - // -------------------------------------------------------------------------- - - __THRUST_DEFINE_HAS_NESTED_TYPE(has_tuning_t, tuning) - __THRUST_DEFINE_HAS_NESTED_TYPE(has_type_t, type) - - template